In this short note, we show that the Diophantine equation 9^x − 3^y = z^2 has all non-negative integer solutions (x, y, z) ∈ {(n, 2n, 0) : n ∈ ℕ ∪ {0}} and that the Diophantine equation 13^x − 7^y = z^2 has the unique non-negative integer solution (x, y, z) = (0, 0, 0).
{"title":"A Short Note on Two Diophantine Equations 9 x – 3y = z2 and 13x – 7y = z2","authors":"S. Tadee","doi":"10.22457/jmi.v24a02215","DOIUrl":"https://doi.org/10.22457/jmi.v24a02215","url":null,"abstract":"In this short note, we show that the Diophantine equation 2 9 3 x y − = z has all non-negative integer solutions , , ∈ { , 2 , 0 : ∈ ℕ ∪ {0}} and the Diophantine equation 2 13 7 x y − = z have the unique non-negative integer solution ( , , ) (0,0,0) x y z = .","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"1 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83202780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We put forward the sum augmented index and the multiplicative sum augmented index of a graph. We determine the sum augmented index and the multiplicative sum augmented index for polycyclic aromatic hydrocarbons and jagged rectangle benzenoid systems.
{"title":"Sum Augmented and Multiplicative Sum Augmented Indices of Some Nanostructures","authors":"V. Kulli","doi":"10.22457/jmi.v24a03219","DOIUrl":"https://doi.org/10.22457/jmi.v24a03219","url":null,"abstract":"We put forward the sum augmented index, multiplicative sum augmented index of a graph. We determine the sum augmented index and the multiplicative sum augmented index for polycyclic aromatic hydrocarbons and jagged rectangle benzenoid systems.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"829 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74443964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In clinical trials, age is often converted to binary data by a cutoff value. However, when looking at a scatter plot for the group of patients whose age is greater than or equal to the cutoff value, age and outcome may not be related. If the group whose age is greater than or equal to the cutoff value is further divided into two groups, the older of the two groups may appear to be at lower risk. In this case, it may be necessary to further divide the group of patients whose age is greater than or equal to the cutoff value into two groups. This study provides a method for determining which of the two or three groups is the best split. The following two methods are used to divide the data. The existing method, the Wilcoxon-Mann-Whitney test by minimum P-value approach, divides data into two groups by one cutoff value. A new method, the Kruskal-Wallis test by minimum P-value approach, divides data into three groups by two cutoff values. Of the two tests, the one with the smaller P-value is used. Because this is a new decision procedure, it was tested using Monte Carlo simulations (MCSs) before application to the available COVID-19 data. The MCS results showed that this method performs well. In the COVID-19 data, it was optimal to divide into three groups by two cutoff values of 60 and 70 years old. By looking at COVID-19 data separated into three groups according to the two cutoff values, it was confirmed that each group had different features. We provide the R code that can be used to replicate the results of this manuscript. Other practical examples can be obtained by replacing x and y with appropriate variables.
{"title":"Trichotomization with two cutoff values using Kruskal-Wallis test by minimum P-value approach","authors":"T. Ogura, C. Shiraishi","doi":"10.2478/jamsi-2022-0010","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0010","url":null,"abstract":"Abstract In clinical trials, age is often converted to binary data by the cutoff value. However, when looking at a scatter plot for a group of patients whose age is larger than or equal to the cutoff value, age and outcome may not be related. If the group whose age is greater than or equal to the cutoff value is further divided into two groups, the older of the two groups may appear to be at lower risk. In this case, it may be necessary to further divide the group of patients whose age is greater than or equal to the cutoff value into two groups. This study provides a method for determining which of the two or three groups is the best split. The following two methods are used to divide the data. The existing method, the Wilcoxon-Mann-Whitney test by minimum P-value approach, divides data into two groups by one cutoff value. A new method, the Kruskal-Wallis test by minimum P-value approach, divides data into three groups by two cutoff values. Of the two tests, the one with the smaller P-value is used. Because this was a new decision procedure, it was tested using Monte Carlo simulations (MCSs) before application to the available COVID-19 data. The MCS results showed that this method performs well. In the COVID-19 data, it was optimal to divide into three groups by two cutoff values of 60 and 70 years old. By looking at COVID-19 data separated into three groups according to the two cutoff values, it was confirmed that each group had different features. We provided the R code that can be used to replicate the results of this manuscript. Another practical example can be performed by replacing x and y with appropriate ones.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"19 - 32"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43936917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper is concerned with the upper bounds of various coefficient functionals for a certain subclass of analytic functions associated with the exponential function in the open unit disc E = {z ∈ ℂ : |z| < 1}. This investigation will motivate other researchers to work in this direction.
{"title":"Coefficient inequalities for a subclass of analytic functions associated with exponential function","authors":"G. Singh, G. Singh","doi":"10.2478/jamsi-2022-0009","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0009","url":null,"abstract":"Abstract This paper is concerned with the upper bound of various coefficient functionals for a certain subclass of analytic functions associated with exponential function in the open unit disc E = {z ∈ℂ : |z| < 1}. This investigation will motivate other researchers to work in this direction.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"5 - 18"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46557602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract We introduce the transmuted Another Two-Parameter Sujatha Distribution, obtained by applying the Quadratic Rank Transmutation Map technique. Various necessary statistical properties of the transmuted Another Two-Parameter Sujatha Distribution are obtained. The reliability measures of the proposed model are also derived, and the model parameters are estimated by the maximum likelihood method. The significance of the transmutation parameter has been tested using the likelihood ratio statistic. Finally, an application to real data sets is presented to examine the significance of the newly introduced model by computing the Kolmogorov statistic, p-value, AIC, BIC, AICC, and HQIC.
{"title":"A new generalized transmuted distribution","authors":"S. A. Wani, S. A. Dar","doi":"10.2478/jamsi-2022-0013","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0013","url":null,"abstract":"Abstract We introduced Transmuted another Two-Parameter Sujatha Distribution by using Quadratic Rank Transmutation Map technique. Various necessary statistical properties of Transmuted another Two-Parameter Sujatha Distribution are obtained. The reliability measures of proposed model are also derived and model parameters are estimated by using maximum likelihood estimation method. The significance of transmuted parameter has been tested by using likelihood ratio statistic. Finally, an application to real data sets is presented to examine the significance of newly introduced model by computing Kolmogorov statistic, p-value, AIC, BIC, AICC, HQIC.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"77 - 101"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49038687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper adopts different optimization algorithms, namely the Genetic Algorithm (GA) and the Particle Swarm Optimization (PSO) algorithm, to train Back-Propagation (BP) neural networks, fits Chinese, Czech, Slovak, Hungarian, and Polish gross domestic product (GDP) growth models (from 1995 to 2020), and makes short-term simulation predictions. We use the PSO algorithm and GA, which have strong global search ability, to optimize the weights and thresholds of the network, combine them with the BP neural network, and apply the resulting Particle Swarm Optimization Back-Propagation (PSO-BP) and Genetic-Algorithm Back-Propagation (GA-BP) combined models to achieve fast convergence. We also compare these two hybrid models with a standard multivariate regression model and with BP neural networks using different initialization methods, such as normal, uniform, and Xavier initialization, for fitting and short-term simulation prediction. We find that all of the above models achieve a good fit and that the PSO-BP combined model, on the whole, has a smaller error than the others in predicting GDP values. Through the PSO-BP and GA-BP techniques, we gain a clearer understanding of the five countries' GDP growth trends, which can help governments make well-founded economic policy decisions.
{"title":"The application of PSO-BP combined model and GA-BP combined model in Chinese and V4’s economic growth model","authors":"X. Gui, Michal Feckan, J. Wang","doi":"10.2478/jamsi-2022-0011","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0011","url":null,"abstract":"Abstract This paper adopts different optimization algorithms such as Genetic Algorithm (GA) and Particle Swarm Optimization Algorithm (PSO-Algorithm) to train Back-Propagation (BP) neural networks, fits the Chinese, the Czech, Slovak, Hungarian, and Polish gross domestic product (GDP) growth model (from 1995 to 2020) and makes short-term simulation predictions. We use the PSO-Algorithm and GA with strong global search ability to optimize the weights and thresholds of the network, combine them with the BP neural network, and apply the resulting Particle Swarm Optimization Back-Propagation (PSO-BP) combined model or Genetic-Algorithm Back-Propagation (GA-BP) combined model to allow the network to achieve fast convergence. Besides, we also compare the above two hybrid models with standard multivariate regression model and BP neural network with different initialization methods like normal uniform and Xavier for fitting and short-term simulation predictions. Finally, we obtain the excellent results that all the above models have achieved a good fitting effect and PSO-BP combined model on the whole has a smaller error than others in predicting GDP values. Through the technology of PSO-BP and GA-BP, we have a clearer understanding of the five countries gross domestic product growth trends, which is conducive to the government to make reasonable decisions on the economic development.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"33 - 56"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45090450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract When implementing newly proposed methods on measurements taken from the human body in clinical trials, researchers carefully consider whether the measurements have the maximum accuracy, and the validity of a new method must be verified before it is put into practice. Method comparison evaluates the agreement between two continuous variables to determine whether the measurements agree well enough for the methods to be used interchangeably. A special consideration of our work is that the variability of the measurements changes with their magnitude. We propose a method to evaluate the agreement of two methods when they are heteroscedastic, using Bayesian inference, since this approach offers a more accurate, flexible, clear, and direct inference model that uses all available information. A simulation study was carried out to verify the characteristics and accuracy of the proposed model using different settings with different sample sizes. A gold particle dataset was analyzed to examine the practical viewpoint of the proposed model. This study shows that the coverage probabilities of all parameters are greater than 0.95. Moreover, all parameters have relatively low error values, and the simulation study implies that the proposed model handles data with stronger heteroscedasticity more accurately than others. In each setting, the model performs best when the sample size is 500.
{"title":"A heteroscedastic Bayesian model for method comparison data","authors":"S. Lakmali, Lakshika S. Nawarathna, P. Wijekoon","doi":"10.2478/jamsi-2022-0012","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0012","url":null,"abstract":"Abstract When implementing newly proposed methods on measurements taken from a human body in clinical trials, the researchers carefully consider whether the measurements have the maximum accuracy. Further, they verified the validity of the new method before being implemented in society. Method comparison evaluates the agreement between two continuous variables to determine whether those measurements agree on enough to interchange the methods. Special consideration of our work is a variation of the measurements with the magnitude of the measurement. We propose a method to evaluate the agreement of two methods when those are heteroscedastic using Bayesian inference since this method offers a more accurate, flexible, clear, and direct inference model using all available information. A simulation study was carried out to verify the characteristics and accuracy of the proposed model using different settings with different sample sizes. A gold particle dataset was analyzed to examine the practical viewpoint of the proposed model. This study shows that the coverage probabilities of all parameters are greater than 0.95. Moreover, all parameters have relatively low error values, and the simulation study implies the proposed model deals with the higher heteroscedasticity data with higher accuracy than others. In each setting, the model performs best when the sample size is 500.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"57 - 75"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43464371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. U. I. Rather, M. Jeelani, M. Shah, S. Rizvi, M. Sharma
Abstract In this study, the difficulty of estimating the population mean in the situation of post-stratification is discussed. The case of post-stratification is considered for ratio-type exponential estimators of the finite population mean. The mean squared error of the proposed estimator is obtained up to the first degree of approximation. In the setting of post-stratification, the proposed estimator is compared with existing estimators. An empirical study using real data and, further, a simulation study have been carried out to demonstrate the performance of the proposed estimator.
{"title":"A new ratio type estimator for computation of population mean under post-stratification","authors":"K. U. I. Rather, M. Jeelani, M. Shah, S. Rizvi, M. Sharma","doi":"10.2478/jamsi-2022-0003","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0003","url":null,"abstract":"Abstract In this study, the difficulty of estimating the population mean in the situation of post-stratification is discussed. The case of post-stratification is presented for ratio-type exponential estimators of finite population mean. Mean-squared error of the proposed estimator is obtained up to the first degree of approximation. In the instance of post-stratification, the proposed estimator was compared with the existing estimators. An empirical study by using some real data and further, simulation study has been carried out to demonstrate the performance of the proposed estimator.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"29 - 42"},"PeriodicalIF":0.3,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48025610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Javanian, R. I. Nabiyyi, J. Toofanpour, M. Q. Vahidi-Asl
Abstract Protected nodes are neither leaves nor parents of any leaves in a rooted tree. We study here the protected node profile, namely, the number of protected nodes with the same distance from the root in digital search trees, fundamental data structures for storing 0-1 strings. When each string is a sequence of independent and identically distributed Bernoulli(p) random variables with 0 < p < 1 and p ≠ 1/2, Drmota and Szpankowski (2011) investigated the expectation of the internal profile by analytic methods. Here, we generalize the main parts of their approach in order to obtain the asymptotic expectations of the protected node profile and the non-protected node profile in digital search trees.
{"title":"Asymptotic expectation of protected node profile in random digital search trees","authors":"M. Javanian, R. I. Nabiyyi, J. Toofanpour, M. Q. Vahidi-Asl","doi":"10.2478/jamsi-2022-0004","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0004","url":null,"abstract":"Abstract Protected nodes are neither leaves nor parents of any leaves in a rooted tree. We study here protected node profile, namely, the number of protected nodes with the same distance from the root in digital search trees, some fundamental data structures to store 0 - 1 strings. When each string is a sequence of independent and identically distributed Bernoulli(p) random variables with 0 < p < ( p≠12 p ne {1 over 2} ), Drmota and Szpankowski (2011) investigated the expectation of internal profile by the analytic methods. Here, we generalize the main parts of their approach in order to obtain the asymptotic expectations of protected node profile and non-protected node profile in digital search trees.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"43 - 57"},"PeriodicalIF":0.3,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44476812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Principal Component Analysis (PCA) is the main method of dimension reduction and data processing when the dataset is of high dimension. Therefore, PCA is a widely used method in almost all scientific fields. Because a principal component is a linear combination of the original variables, the interpretation of the analysis results often runs into difficulties. The approaches proposed for solving these problems are referred to as Sparse Principal Component Analysis (SPCA). Sparse approaches are not robust in the presence of outliers in the data set. In this study, the performance of the approach proposed by Croux et al. (2013), which combines the advantageous properties of SPCA and Robust Principal Component Analysis (RPCA), is examined through one real and three artificial datasets in the situation of full sparseness. In light of the findings, it is recommended to use robust sparse PCA based on projection pursuit when analyzing the data. Another important finding of the study is that the BIC and TPO criteria used for determining lambda are not much superior to each other; we suggest choosing whichever of these two criteria gives the better result.
{"title":"Robust sparse principal component analysis: situation of full sparseness","authors":"B. Alkan, I. Ünaldi","doi":"10.2478/jamsi-2022-0001","DOIUrl":"https://doi.org/10.2478/jamsi-2022-0001","url":null,"abstract":"Abstract Principal Component Analysis (PCA) is the main method of dimension reduction and data processing when the dataset is of high dimension. Therefore, PCA is a widely used method in almost all scientific fields. Because PCA is a linear combination of the original variables, the interpretation process of the analysis results is often encountered with some difficulties. The approaches proposed for solving these problems are called to as Sparse Principal Component Analysis (SPCA). Sparse approaches are not robust in existence of outliers in the data set. In this study, the performance of the approach proposed by Croux et al. (2013), which combines the advantageous properties of SPCA and Robust Principal Component Analysis (RPCA), will be examined through one real and three artificial datasets in the situation of full sparseness. In the light of the findings, it is recommended to use robust sparse PCA based on projection pursuit in analyzing the data. Another important finding obtained from the study is that the BIC and TPO criteria used in determining lambda are not much superior to each other. We suggest choosing one of these two criteria that give an optimal result.","PeriodicalId":43016,"journal":{"name":"Journal of Applied Mathematics Statistics and Informatics","volume":"18 1","pages":"5 - 20"},"PeriodicalIF":0.3,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41958636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}