In this paper, we present a low-profile, wideband, dual-polarized suspended patch antenna with high isolation for LTE700/LTE850/CDMA850/GSM900 applications. The proposed antenna element consists of two suspended patches excited orthogonally through a modified cross-slot. It is theoretically and experimentally verified that this modification significantly widens the −15 dB impedance bandwidth and improves the port isolation between polarizations. Measurements, in agreement with the simulations, show that the antenna exhibits a 37% −15 dB impedance bandwidth (688-1000 MHz) and more than 35 dB port isolation over the operating frequency band. In the principal planes, the antenna provides symmetric broadside radiation patterns with half-power beamwidths of 55.32°-65.61° and 61.79°-66.84°. The gain of the antenna varies within the 7.3-8.4 dBi range at the measured frequencies. The size of the antenna is 380 × 380 × 43.6 mm³, which makes it well suited for integration into commercially deployed base stations. The geometry, feeding mechanism, parametric studies, and experimental results of the proposed antenna are presented throughout the paper.
{"title":"A Low-Profile High Isolation Wideband Dual-Polarized Antenna for Sub-1 GHz Base Stations","authors":"E. A. Miran, M. Çiydem","doi":"10.35378/gujs.1279556","DOIUrl":"https://doi.org/10.35378/gujs.1279556","url":null,"abstract":"In this paper, we present a low profile, wideband dual-polarized suspended patch antenna for LTE700/LTE850/CDMA850/GSM900 applications with high isolation. The proposed antenna element consists of two suspended patches excited orthogonally through modified cross-slot. It is theoretically and experimentally verified that this modification significantly widens −15 dB impedance bandwitdh and improves port isolation between polarizations. Experiments reveal in parallel with conducted simulations that the antenna is capable of exhibiting 37% −15 dB impedance bandwidth (688-1000 MHz) and more than 35 dB port isolation over operating frequency band. In principle planes, the antenna provides symmetric broadside radiation patterns at boresight with half-power beamwidths of 55.32º-65.61º and 61.79º-66.84º. Gain of the antenna varies within 7.3-8.4 dBi range at measured frequencies. The size of the antenna is 380 × 380 × 43.6 mm3 which makes it very suitable to integrate into commercially deployed base stations. Geometry, feeding mechanism, parametric studies and experimental results of the proposed antenna are presented throughout the paper.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":"181 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139355722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, the distribution's reliability and hazard functions, as well as the population parameters, are estimated for the length-biased weighted Lomax (LBWLo) distribution based on progressively Type II censored samples. The maximum likelihood and Bayesian methods are employed to obtain the proposed estimators. Gamma and Jeffreys' priors serve as informative and non-informative priors, respectively, from which the posterior distribution of the LBWLo distribution is constructed. To obtain the Bayesian estimates, the Metropolis-Hastings (MH) algorithm is also used. We obtain asymptotic confidence intervals based on the Fisher information matrix. Using the sample produced by the MH method, we construct the highest posterior density intervals. A numerical simulation study is conducted to evaluate the effectiveness of the approaches. Through Monte Carlo simulation, we compare the proposed estimators in terms of mean squared error. Coverage probabilities and average lengths of the 95% intervals are also obtained. The study's findings support the idea that, in the majority of cases, Bayes estimates with an informative prior are more appropriate than the other estimates.
{"title":"Classical and Bayesian Inference for the Length Biased Weighted Lomax Distribution under Progressive Censoring Scheme","authors":"Amal S. HASSAN, Samah A. ATİA, Hiba Z. MUHAMMED","doi":"10.35378/gujs.1249968","DOIUrl":"https://doi.org/10.35378/gujs.1249968","url":null,"abstract":"In this study, the distribution’s reliability and hazard functions, as well as the population parameters, are estimated for the length biased weighted Lomax (LBWLo) based on progressively Type II censored samples. The maximum likelihood and Bayesian methods are implanted to get the proposed estimators. Gamma and Jeffery's priors serve as informative and non-informative priors, respectively, from which the posterior distribution of the LBWLo distribution is constructed. To obtain the Bayesian estimates, the Metropolis-Hasting (MH) algorithm is also used. We obtain asymptotic confidence intervals based on the Fisher information matrix. Using the sample produced by the MH method, we construct the intervals with the highest posterior densities. A numerical simulation research is done to evaluate the effectiveness of the approaches. Through Monte Carlo simulation, we compare the proposed estimators in terms of mean squared error. It is possible to get coverage probability and average interval lengths of 95% .The study's findings supported the idea that, in the majority of cases, Bayes estimates with an informative prior are more appropriate than other estimates.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":"31 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139356327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capacity planning should be performed to balance the investment costs and benefits of meeting current and future demand in intensive care units. Maintaining a high capacity to increase patient admission leads to unutilized capacity in some periods, thereby increasing costs. On the other hand, admission requests from inborn and transported patients might be rejected due to a lack of equipment. Since both cost-effectiveness and patient health are at stake, the optimal equipment capacity must be determined. In this study, the optimal capacity planning problem is considered for the neonatal intensive care unit of a hospital, adopting a simulation-optimization approach. A discrete event simulation model is proposed for a neonatal intensive care unit in Adana, Turkey. The optimization model then identifies the optimal numbers of incubators, ventilators, and nitric oxide devices that maximize equipment efficiency and minimize the total inborn-patient rejection and transport ratios. Three different resource allocations are presented, and the best with respect to these three objectives is obtained with 72 incubators, 35 ventilators, and three nitric oxide devices. The results reveal that the combined rejection and transport rate, which is 1.12% in the current situation, can be reduced to 0.2% with different equipment counts, and that equipment efficiency can be achieved with optimal numbers of equipment. The results can help decision-makers when minimum transport and rejection ratios are critical, as they are in almost all intensive care units. Furthermore, the proposed simulation-optimization model can be adapted to other neonatal intensive care units with the same characteristics.
{"title":"Optimal Equipment Capacity Planning in the Neonatal Intensive Care Unit with Simulation-Optimization Approach","authors":"Mufide Narli, Yusuf Kuvvetli̇, Ali Kokangül","doi":"10.35378/gujs.1247829","DOIUrl":"https://doi.org/10.35378/gujs.1247829","url":null,"abstract":"Capacity planning should be performed to balance investment costs and benefits of investing to meet the current and future demand in intensive care units. Having a high capacity to increase patient admission will lead to unutilized capacity in some periods, thereby increasing costs. On the other hand, patient admission requests from inborn and transported patients might be rejected due to lack of equipment. It should be considered in terms of cost-effectiveness and patient health; therefore, optimal equipment capacity must be determined. In this study, the optimal capacity planning problem has been considered for the neonatal intensive care unit of a hospital adopting the simulation-optimization approach. A discrete event simulation model is proposed for a neonatal intensive care unit in Adana, Turkey. Then, the optimization model identified the optimal numbers of incubators, ventilators, and nitric oxide devices to maximize equipment efficiency and minimize total inborn patient rejection and transport ratios. Three different resource allocations are presented, and the best is obtained from these three objectives as 72 incubators, 35 ventilators, and three nitric oxide devices. The application results obtained have revealed that the rejection and transport rate, which is found to be 1.12% in the current situation, can be reduced to 0.2% with different numbers of equipment and that equipment efficiency can be achieved with optimal numbers of equipment. The results of the study can help the decision-makers when minimum transport and rejection ratios are critical which almost intensive care units are required. Furthermore, the proposed simulation-optimization model can be adapted to different neonatal intensive care units having the same characteristics.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":"54 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139358669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, a novel magnetic system that allows observing quantized conductance in undergraduate and graduate laboratories is presented. The bending of a magnetic cylindrical beam, like a cantilever, was controlled by an electromagnet to provide contact between a needle-type electrode and a conductor plane. It is shown that, by using the beam bending, it is possible to displace an object on the beam at the nanometer and micrometer scales. The measured quantized conductance results prove that the designed system can be used for demonstrating quantized conductance.
{"title":"OBSERVING CONDUCTANCE QUANTIZATION by a NOVEL MAGNETIC CONTROL SYSTEM","authors":"Dila Çi̇ğdem, Bilge Toprak Karakaya, Duru Deği̇mli̇, Meltem GÖNÜLOL ÇELİKOĞLU, Yavuz Öztürk","doi":"10.35378/gujs.1222023","DOIUrl":"https://doi.org/10.35378/gujs.1222023","url":null,"abstract":"In this study, a novel magnetic system that allows observing quantized conductance for undergraduate and graduate laboratories is presented. Bending of a magnetic cylindrical beam, like a cantilever, was controlled by an electromagnet to provide contact between needle type electrode and a plane of conductor. It is shown that by using the beam bending, it is possible to displace an object on the beam in nanometer and micrometer scale. The measured quantized conductance results prove that the designed system can be used for demonstration of quantized conductance.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46908279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mortality risks of important diseases such as cancer can be estimated using gene profiles, which are high-dimensional data obtained from gene expression sequences. However, it is impossible to analyze high-dimensional data with classical techniques due to multicollinearity, the time-consuming processing load, and the difficulty of interpreting the results. For this purpose, extreme learning machine methods, which can solve regression and classification problems, have become one of the most preferred machine learning approaches owing to their fast data analysis and ease of application. The goal of this study is to compare the estimation performance for risk score and short-term survival of survival extreme learning machine methods, L2-penalty Cox regression, and supervised principal components analysis on generated high-dimensional survival data. The survival models are evaluated with Harrell's concordance index, the integrated Brier score, the F1 score, the kappa coefficient, the area under the ROC curve, the area under the precision-recall curve, accuracy, and the Matthews correlation coefficient. All results show that survival extreme learning machine methods, which allow analyzing high-dimensional survival data without the necessity of dimension reduction, perform very competitively with the other popular classical methods used in the study.
{"title":"Survival Prediction with Extreme Learning Machine, Supervised Principal Components and Regularized Cox Models in High-Dimensional Survival Data by Simulation","authors":"Fulden CANTAŞ TÜRKİŞ, İ. Kurt Omurlu, M. Türe","doi":"10.35378/gujs.1223015","DOIUrl":"https://doi.org/10.35378/gujs.1223015","url":null,"abstract":"Mortality risks of important diseases such as cancer can be estimated using gene profiles which are high-dimensional data obtained from gene expression sequences. However, it is impossible to analyze high-dimensional data with classical techniques due to multicollinearity, time-consuming processing load, and difficulty interpreting the results. For this purpose, extreme learning machine methods, which can solve regression and classification problems, have become one of the most preferred machine learning methods regarding fast data analysis and ease of application. The goal of this study is to compare estimation performance of risk score and short-term survival with survival extreme learning machine methods, L2-penalty Cox regression, and supervised principal components analysis in generated high-dimensional survival data. The survival models have been evaluated by Harrell’s concordance index, integrated Brier score, F1 score, kappa coefficient, the area under the curve, the area under precision-recall, accuracy, and Matthew’s correlation coefficient. All results showed that survival extreme learning machine methods that allow analyzing high-dimensional survival data without the necessity of dimension reduction perform very competitive with the other popular classical methods used in the study.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46047966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, the aim was to find out whether electroencephalographic (EEG) frequency bands can be used to distinguish people with obstructive sleep apnea (OSA) from those who do not have it. 11,842 different cases taken from 121 patients suffering from OSA were combined with a 30-person control group without sleep apnea. Apneas were marked on the respiration-record channels, and the EEG records concurrent with the abnormal respiration events were extracted from the C4-A1 and C3-A2 channels. They were then examined with Fourier and wavelet transforms using new software that we developed. The percentage values of the Delta (0.5-4 Hz), Theta (4-8 Hz), Alpha (8-13 Hz), and Beta (13-30 Hz) frequency bands were evaluated with the help of the t-test and ROC analysis to differentiate between apneas. The C3-A2 Beta (%) frequency level showed the highest discriminative power (AUC=0.662; p
{"title":"Distinguishing Obstructive Sleep Apnea Using Electroencephalography Records","authors":"I. Umut, Hakan Üstünel, Güven Çentik, E. Uçar, L. Öztürk","doi":"10.35378/gujs.1229166","DOIUrl":"https://doi.org/10.35378/gujs.1229166","url":null,"abstract":"In this study, it was aimed to find out whether electroencephalographic (EEG) frequency bands can be used to distinguish people with obstructive sleep apnea (OSA) from those who do not have it. 11842 different cases taken from 121 patients suffering from OSA were combined with the case study of 30-person control group without sleep apnea. Apneas were highlighted at the respiration-record channels and EEG records which are concurrent with abnormal respiration cases were extracted from C4-A1 and C3-A2. Following that, they were examined with Fourier and Wavelet Transforms using a new software that was developed by us. The percentage values of Delta (0, 5-4 Hz), Theta (4-8 Hz), Alpha (8-13 Hz) and Beta (13-30 Hz) frequency bands were evaluated with the help of t-test and ROC Analysis to differentiate between apneas. The C3-A2 Beta (%) frequency level gave the highest distinguishing asset (AUC=0.662; p","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49468964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining the minimum dominating set in connected graphs is one of the most difficult problems, known to be NP-hard. The problem aims to determine the important nodes that can influence all nodes of the graph via the minimum number of nodes. In this study, an efficient near-optimal algorithm with a deterministic approach has been developed, different from the approximation algorithms mentioned in the literature for discovering a dominating set. The algorithm has O(n³) time complexity in determining the dominating set (DS). At the same time, it is an original algorithm whose solution is not random, owing to its use of fundamental cut-sets. The DS algorithm consists of three basic phases. In the first phase, an algorithm that constructs a special spanning tree (the Karci Max tree) of the graph is developed. In the second phase, an algorithm that finds the fundamental cut-sets using the Kmax spanning tree is developed. In the last phase, Karci centrality values of the nodes are calculated from the fundamental cut-sets, and an algorithm that uses these values to identify the DS nodes is developed. As a result of these three phases, the dominance values of the nodes in the graph and the DS nodes are computed. The calculated Karci centrality values prioritize node selection for determining the DS. All phases of the developed DS and efficient-node algorithms were coded in the R programming language, and the results were examined by running them on sample graphs.
{"title":"Efficient Algorithm for Dominatig Set In Graph Theory Based on Fundamental Cut-Set","authors":"Furkan Öztemiz, A. Karcı","doi":"10.35378/gujs.1243008","DOIUrl":"https://doi.org/10.35378/gujs.1243008","url":null,"abstract":"Determining the minimum dominating set in connected graphs is one of the most difficult problems defined as NP-hard. In this problem, it is aimed to determine the important nodes that can influence all nodes via the minimum number of nodes on the graph. In this study, an efficient near optimal algorithm showing a deterministic approach has been developed different from the approximation algorithms mentioned in literature for discovering dominating set. The algorithm has O(n3) time complexity in determining the Dominating Set (DS). At the same time, the algorithm is a original algorithm whose solution is not random by using fundamental cut-set. The DS algorithm consists of 3 basic phases. In the first phase of the algorithm, the algorithm that constructs the special spanning tree (Karci Max tree) of the graph is developed. In the second phase, the algorithm that finds the fundamental cut sets using the Kmax spanning tree is developed. In the last phase, Karci centrality node values are calculated with fundamental cut set and by using these Karci centrality node values, an algorithm has been developed to identify DS nodes. As a result of these three phases, the dominance values of the nodes on the graph and the DS nodes are calculated. The detected Karci centrality node values give priority to the node selection for determining the DS. All phases of the developed DS and Efficient node algorithms were coded in R programming language and the results were examined by running on sample graphs.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48515752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examined the corrosion resistance of metallic paint-coated, uncoated, and damaged paint-coated forms of the high-strength 6061-T6 Al alloy in seawater. A solvent-based paint containing two different metallic pigments was produced with an alkyd binder, and the Al 6061-T6 alloy was coated with this paint. The electrochemical impedance spectroscopy method was used to determine the course of corrosion. Ecor and Rp values were calculated from the potential and current change values. The Ecor and Rp values, SEM-EDX images, and Nyquist curves showed that the corrosion resistance of the impact-damaged coatings was lower. It was also observed that the corrosion resistance of the gold-color paint, which largely contains copper pigment, was lower than that of the silver-color paint containing Al pigment.
{"title":"Examining the Corrosion Behavior of The 6061-T6 Al Alloy Inside Seawater With Decorative Gold- And Silver-Color Coating","authors":"J. Erkmen, Benek Hamamci, A. Aydin","doi":"10.35378/gujs.1219180","DOIUrl":"https://doi.org/10.35378/gujs.1219180","url":null,"abstract":"This study examined the corrosion resistance of the metallic paint-coated, uncoated, and damaged paint-coated form of the high-strength 6061 T6 Al alloy inside seawater. Solvent-based paint containing two different metallic pigments produced with an alkyd binder was produced and the coating of Al 6061 T6 alloy was made with this paint. To determine the course of corrosion electrochemical impedance spectroscopy method was used. Ecor and Rp values were calculated from potential and current change values. As a result, it was determined from the Ecor, Rp, SEM -EDX images, and Nyquist curves that the corrosion resistance of impact coatings was lower. The corrosion resistance of gold color paint substantially containing copper pigment was lower than that of silver color paint containing Al pigment was observed.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49536947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The initial risk assessment, especially when using a risk-score system, is the main step of a risk assessment process, coming after the scope of the risks and the assessment has been determined. The Fine-Kinney method, a thorough quantitative approach that helps keep risks under control, is commonly used in risk assessment. In the standard version of Fine-Kinney, a risk score (RS) is determined by multiplying the probability (P), exposure (E), and consequence (C) parameters. The Fine-Kinney-based risk evaluation approach has the disadvantage of not accounting for the interactions among the risk parameters when determining the risk precedence of work-related hazards. Hence, a new hazard evaluation method for occupational health and safety (OHS) is required to lessen the adverse effects of rising dangers. In this paper, a novel approach is proposed that integrates Fine-Kinney-based occupational hazard evaluation with AHP-COPRAS for the energy distribution and investment sector under a Pythagorean fuzzy environment. A lifting-equipment case study is used to demonstrate the practicality and efficacy of the suggested integrated approach. To verify the novel risk assessment method, a comparative study and a sensitivity analysis are also provided. As a result, using the benefit of Pythagorean fuzzy sets, which express uncertainty more effectively, the integrated approach yields more logical conclusions for assessing work-related hazards in the energy distribution and investment sector.
{"title":"Fine-Kinney-Based Occupational Risk Assessment using Pythagorean Fuzzy AHP-COPRAS for the Lifting Equipment in the Energy Distribution and Investment Sector","authors":"Suleyman Recep Satici, Suleyman Mete","doi":"10.35378/gujs.1227756","DOIUrl":"https://doi.org/10.35378/gujs.1227756","url":null,"abstract":"The initial risk assessment, especially when using a risk score system, is the main step in a risk assessment process that comes after determining the scope of the risks and assessment. The Fine-Kinney method, a thorough approach to quantitative assessments to help keep risks under control, is commonly used in risk assessment. A risk score (RS) is determined using the standard version of Fine-Kinney by mathematically multiplication of probability (P), exposure (E), and consequence (C) parameters. The Fine-Kinney-based risk evaluation approach has the disadvantage of not accounting for the relationships among the risk parameters' interaction and determining the risk precedence of work-related hazards. Hence, a new hazard evaluation method for occupational health and safety (OHS) is required to lessen the adverse effects of rising dangers. In this paper, a novel approach is proposed for integrating Fine–Kinney-based occupational hazard evaluation and AHP-COPRAS for the energy distribution and investment sector under the Pythagorean fuzzy environment. A lifting equipment case study is used to demonstrate practicality and efficacy of the suggested integrated approach. To verify the novel method to risk assessment, a comparative study and sensitivity analysis are also provided. As a result, using the benefit of Pythagorean fuzzy sets, which more effectively express uncertainty, the integrated approach yields more logical conclusions for assessing work-related hazards in the energy distribution and investment sector.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42866825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development and launch of communication satellite projects pose significant challenges and costs. The expenses can reach several hundred million dollars, contingent on factors such as mission objectives, the size and complexity of the satellite system including the launch vehicle, and the ground infrastructure. Satellites must be designed to withstand harsh conditions in space, such as extreme temperatures, radiation, and other hazards, while delivering reliable communication services to their users. However, once a satellite is launched, physical maintenance interventions become infeasible in the event of technical problems. Thus, reliability is a critical aspect of these expensive systems. This study aims to minimize the cost of a high-tech communication satellite by addressing design considerations that meet customer reliability requirements without exceeding power and redundant-equipment limits. To achieve this goal, we propose an integer non-linear programming model. To solve the satellite design problem, we adopt a two-stage solution approach. Conventional industrial practice in satellite design often involves iterative attempts to determine the redundancy level of onboard units based on customer reliability requirements. These processes rely heavily on the experience of design engineers, who evaluate a limited number of alternatives to determine the number of redundant units, resulting in sub-optimal outcomes. In contrast, our proposed approach handles the problem systematically and yields optimal results. Our findings demonstrate that the proposed two-phase approach can achieve optimal redundancy levels within seconds.
{"title":"A Two-Phase Approach for Reliability-Redundancy Optimization of a Communication Satellite","authors":"Gazi University, T. Teti̇k, G. Das, B. Birgoren","doi":"10.35378/gujs.1186561","DOIUrl":"https://doi.org/10.35378/gujs.1186561","url":null,"abstract":"The development and launch of communication satellite projects pose significant challenges and costs. The expenses can range from several hundred million dollars, contingent on factors such as mission objectives, satellite system size and complexity including the launch vehicle, and ground infrastructure. Satellites must be designed to withstand harsh conditions in space, such as the extreme temperatures, radiation, and other hazards, while delivering reliable communication services to its users. However, once a satellite is launched, physical maintenance interventions become infeasible in the event of technical problems. Thus, reliability is a critical aspect for these expensive systems.\u0000\u0000This study aims to minimize the cost of a high-tech communication satellite by addressing design considerations that meet customer reliability requirements without exceeding power and redundant equipment limits. To achieve this goal, we propose an integer non-linear programming model in this research. To solve the satellite design problem, we adopt a two-stage solution approach. Conventional industrial practices in satellite design often involve iterative attempts to determine the redundancy level of onboard units based on customer reliability requirements. These processes rely heavily on the experience of design engineers who evaluate a limited number of alternatives to determine the number of redundant units, resulting in sub-optimal outcomes. In contrast, our proposed approach systematically handles the problem and yields optimal results. Our findings demonstrate that the proposed two-phase approach can achieve optimal redundancy levels within seconds.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43719902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}