In this study, a new loss distribution, called the exponentiated Fréchet loss distribution, is developed and studied. Plots of its density function show that the distribution can exhibit a variety of shapes, including right-skewed and decreasing shapes, and various degrees of kurtosis. Several properties of the distribution are derived, including the moments, mean excess function, limited expected value function, value at risk, tail value at risk, and tail variance. Estimators of the parameters are obtained via the maximum likelihood, maximum product spacing, ordinary least squares, and weighted least squares methods, and their performance is investigated through simulation studies, which show that the estimators are consistent. The new distribution is also extended to a regression model. The usefulness and applicability of the new distribution and its regression model are demonstrated using actuarial data sets. The results show that the new loss distribution can serve as an alternative model for actuarial data.
{"title":"Actuarial Measures, Regression, and Applications of Exponentiated Fréchet Loss Distribution","authors":"A. Abubakari","doi":"10.1155/2022/3155188","DOIUrl":"https://doi.org/10.1155/2022/3155188","url":null,"abstract":"In this study, a new loss distribution, called the exponentiated Fréchet loss distribution is developed and studied. The plots of the density function of the distribution show that the distribution can exhibit different shapes including right skewed and decreasing shapes, and various degrees of kurtosis. Several properties of the distribution are obtained including moments, mean excess function, limited expected value function, value at risk, tail value at risk, and tail variance. The estimators of the parameters of the distribution are obtained via maximum likelihood, maximum product spacing, ordinary least squares, and weighted least squares methods. The performances of the various estimators are investigated using simulation studies. The results show that the estimators are consistent. The new distribution is extended into a regression model. The usefulness and applicability of the new distribution and its regression model are demonstrated using actuarial data sets. The results show that the new loss distribution can be used as an alternative to modelling actuarial data.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134407943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we use Tran Hung Thao's approximation of fractional Brownian motion to approximate the shadow price of the fractional Black–Scholes model. When maximizing the expected utility in a portfolio optimization problem under transaction costs, the shadow price is approximated by a Markovian semimartingale.
{"title":"Shadow Price Approximation for the Fractional Black Scholes Model","authors":"Dolemweogo Sibiri Narcisse, Béré Frédéric, Nitiéma S. Pierre Clovis","doi":"10.1155/2022/4719482","DOIUrl":"https://doi.org/10.1155/2022/4719482","url":null,"abstract":"In this work, we used Tran Hung Thao’s approximation of fractional Brownian motion to approximate the shadow price of the fractional Black Scholes model. In the case to maximize expectation of the utility function in a portfolio optimization problem under transaction cost, the shadow price is approximated by a Markovian process and semimartingale.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130239292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research compares confidence interval estimation for the variance of a normal distribution using a classical method and a Bayesian approach. Maximum likelihood is the well-known method for estimating the variance, and the Chi-squared distribution provides its confidence interval. The Bayesian approach forms the posterior distribution, from which the variance estimator follows, depending on the likelihood and the prior distribution. Three kinds of prior information are considered: an available prior distribution, an informative prior distribution, and a noninformative prior distribution, with the gamma, Chi-squared, and exponential distributions used as priors. Under the informative prior, the Markov chain Monte Carlo (MCMC) method draws random samples from the posterior distribution; under the noninformative prior, the Fisher information yields the Wald confidence interval. The interval estimation of the Bayesian approach is obtained from the central limit theorem. Performance is assessed by the coverage probability and the minimum average interval width. A Monte Carlo process simulates data from a normal distribution with a fixed mean and several variances and sample sizes; the R program generates the simulated data, repeated 10,000 times in each situation. The results show that the maximum likelihood method performs well for small sample sizes, that the Bayesian approach with an available prior distribution gives the best confidence intervals as the sample size increases, and that overall the Wald confidence interval tends to perform best at large sample sizes. As an application to real data, we consider reported airborne particulate matter (PM2.5) in Bangkok, Thailand, using 10–1000 records to estimate the confidence interval of the variance and evaluate the interval width. The results are similar to those of the simulation study.
{"title":"Bayesian Approach for Confidence Intervals of Variance on the Normal Distribution","authors":"Autcha Araveeporn","doi":"10.1155/2022/8043260","DOIUrl":"https://doi.org/10.1155/2022/8043260","url":null,"abstract":"This research aims to compare estimating the confidence intervals of variance based on the normal distribution with the primary method and the Bayesian approach. The maximum likelihood is the well-known method to approximate variance, and the Chi-squared distribution performs the confidence interval. The central Bayesian approach forms the posterior distribution that makes the variance estimator, which depends on the probability and prior distributions. Most introductory prior information looks for the availability of the prior distribution, informative prior distribution, and noninformative prior distribution. The gamma, Chi-squared, and exponential distributions are defined in the prior distribution. The informative prior distribution uses the Markov Chain Monte Carlo (MCMC) method to draw the random sample from the posterior distribution. The Fisher information performs the Wald confidence interval as the noninformative prior distribution. The interval estimation of the Bayesian approach is obtained from the central limit theorem. The performance of these methods considers the coverage probability and minimum value of the average width. The Monte Carlo process simulates the data from a normal distribution with the true parameter of mean and several variances and the sample sizes. The R program generates the simulated data repeated 10,000 times in each situation. The results showed that the maximum likelihood method employed on the small sample sizes. The best confidence interval estimation was when sample sizes increased the Bayesian approach with an available prior distribution. Overall, the Wald confidence interval tended to outperform the large sample sizes. For application in real data, we expressed the reported airborne particulate matter of 2.5 in Bangkok, Thailand. We used the 10–1000 records to estimate the confidence interval of variance and evaluated the interval width. The results are similar to those of the simulation study.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130279124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In method comparison studies, measurement errors occur in both methods. The classical regression approach (linear regression) cannot be used for the analysis because it may yield biased and inefficient estimates; for this reason, Deming regression is preferred over classical regression. The focus of this work is to assess the impact of censored data on the traditional Deming regression, which deletes censored observations, compared with an adapted version of Deming regression that accounts for censoring. The study is based on simulation studies, with NLMIXED used as the tool to analyse the data. Eight different simulation studies were run, each consisting of 100 datasets of 300 observations. The simulations suggest that the traditional Deming regression, which deletes censored observations, gives biased estimates and low coverage, whereas the adapted Deming regression that accounts for censoring gives estimates close to the true values, making them unbiased, and achieves high coverage. When the analytical error ratio is misspecified, the estimates are likewise biased and unreliable.
{"title":"Impact of Using Double Positive Samples in Deming Regression","authors":"S. Adarkwa, F. Owusu, S. Okyere","doi":"10.1155/2022/3984857","DOIUrl":"https://doi.org/10.1155/2022/3984857","url":null,"abstract":"In the method comparison approach, two measurement errors are observed. The classical regression approach (linear regression) method cannot be used for the analysis because the method may yield biased and inefficient estimates. In view of that, the Deming regression is preferred over the classical regression. The focus of this work is to assess the impact of censored data on the traditional regression, which deletes the censored observations compared to an adapted version of the Deming regression that takes into account the censored data. The study was done based on simulation studies with NLMIXED being used as a tool to analyse the data. Eight different simulation studies were run in this study. Each of the simulation is made up of 100 datasets with 300 observations. Simulation studies suggest that the traditional Deming regression which deletes censored observations gives biased estimates and a low coverage, whereas the adapted Deming regression that takes censoring into account gives estimates that are close to the true value making them unbiased and gives a high coverage. When the analytical error ratio is misspecified, the estimates are as well not reliable and biased.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131225042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a fractional differential equation of order one-half to model the evolution through time of the dynamics of accumulation and elimination of a contaminant in a human organism with a deficient immune system, during consecutive intakes of contaminated food. This process quantifies the exposure of subjects living with comorbidity (non-breastfed children, the elderly, and pregnant women) to food-borne toxins. The Adomian decomposition method and the Riemann–Liouville fractional integral are used in the modeling process.
{"title":"A Stochastic Approach to Modeling Food Pattern","authors":"Komla Elom Adedje, D. Barro","doi":"10.1155/2022/9011873","DOIUrl":"https://doi.org/10.1155/2022/9011873","url":null,"abstract":"In this paper, we propose a fractional differential equation of order one-half, to model the evolution through time of the dynamics of accumulation and elimination of the contaminant in the human organism with a deficient immune system, during consecutive intakes of contaminated food. This process quantifies the exposure to toxins of subjects living with comorbidity (children not breastfed, the elderly, and pregnant women) to food-borne diseases. The Adomian Decomposition Method and the fractional integration of Riemann Liouville are used in the modeling processes.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134356655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, two new distributions are developed by compounding Sine-Weibull and zero-truncated geometric distributions. The quantile and ordinary moments of the distributions are obtained. Plots of the hazard rate functions of the distributions show that the distributions exhibit nonmonotonic failure rates. Also, plots of the densities of the distributions show that they exhibit decreasing, skewed, and approximately symmetric shapes, among others. Mixture and nonmixture cure rate models based on these distributions are also developed. The estimators of the parameters of the cure rate models are shown to be consistent via simulation studies. Covariates are introduced into the cure rate models via the logit link function. Finally, the performance of the distributions and the cure rate and regression models is demonstrated using real datasets. The results show that the developed distributions can serve as alternatives to existing models for survival data analyses.
{"title":"Sine-Weibull Geometric Mixture and Nonmixture Cure Rate Models with Applications to Lifetime Data","authors":"I. Angbing, Suleman Nasiru, D. Jakperik","doi":"10.1155/2022/1798278","DOIUrl":"https://doi.org/10.1155/2022/1798278","url":null,"abstract":"In this study, two new distributions are developed by compounding Sine-Weibull and zero-truncated geometric distributions. The quantile and ordinary moments of the distributions are obtained. Plots of the hazard rate functions of the distributions show that the distributions exhibit nonmonotonic failure rates. Also, plots of the densities of the distributions show that they exhibit decreasing, skewed, and approximately symmetric shapes, among others. Mixture and nonmixture cure rate models based on these distributions are also developed. The estimators of the parameters of the cure rate models are shown to be consistent via simulation studies. Covariates are introduced into the cure rate models via the logit link function. Finally, the performance of the distributions and the cure rate and regression models is demonstrated using real datasets. The results show that the developed distributions can serve as alternatives to existing models for survival data analyses.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"60 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133329572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study constant mean curvature foliations of noncompact Riemannian manifolds satisfying some geometric constraints. As a byproduct, we answer a question by M. P. do Carmo (see Introduction) about the leaves of such foliations.
{"title":"A Note on Constant Mean Curvature Foliations of Noncompact Riemannian Manifolds","authors":"S. Ilias, Barbara Nelli, M. Soret","doi":"10.1155/2022/7350345","DOIUrl":"https://doi.org/10.1155/2022/7350345","url":null,"abstract":"We aimed to study constant mean curvature foliations of noncompact Riemannian manifolds, satisfying some geometric constraints. As a byproduct, we answer a question by M. P. do Carmo (see Introduction) about the leaves of such foliations.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126758874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we design and investigate a higher-order ε-uniformly convergent method to solve singularly perturbed parabolic reaction-diffusion problems with a large time delay. We use the Crank–Nicolson method for the time derivative, while the spatial derivative is discretized using a nonstandard finite difference approach on a uniform mesh. Furthermore, to improve the order of convergence, we use the Richardson extrapolation technique. The designed scheme converges independently of the perturbation parameter (ε-uniform convergence) and achieves fourth-order convergence in both the time and spatial variables. Two model examples are considered to demonstrate the applicability of the suggested method. The proposed method produces better accuracy and a higher rate of convergence than some methods in the literature.
{"title":"A Nonstandard Fitted Operator Method for Singularly Perturbed Parabolic Reaction-Diffusion Problems with a Large Time Delay","authors":"A. Tiruneh, G. A. Derese, D. Tefera","doi":"10.1155/2022/5625049","DOIUrl":"https://doi.org/10.1155/2022/5625049","url":null,"abstract":"In this paper, we design and investigate a higher order \u0000 \u0000 ε\u0000 \u0000 -uniformly convergent method to solve singularly perturbed parabolic reaction-diffusion problems with a large time delay. We use the Crank–Nicolson method for the time derivative, while the spatial derivative is discretized using a nonstandard finite difference approach on a uniform mesh. Furthermore, to improve the order of convergence, we used the Richardson extrapolation technique. The designed scheme converges independent of the perturbation parameter (\u0000 \u0000 ε\u0000 \u0000 -uniformly convergent) and also achieves fourth-order convergent in both time and spatial variables. Two model examples are considered to demonstrate the applicability of the suggested method. The proposed method produces better accuracy and a higher rate of convergence than some methods that appear in the literature.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121307068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The key purpose of this paper is to establish the boundedness of generalized Bessel–Riesz operators defined with doubling measures in Lebesgue spaces with different measures. The Bessel-decaying kernel of the operators satisfies some elementary properties. The doubling measure, Young's inequality, and Minkowski's inequality are used in the proofs of boundedness of the integral operators. In addition, we explore the relation between the parameters of the kernel and the generalized integral operators, and show that the norm of these generalized operators is bounded by the norm of their kernel with different measures.
{"title":"A Note about Young's Inequality with Different Measures","authors":"Saba Mehmood, Eridani Eridani, F. Fatmawati","doi":"10.1155/2022/4672957","DOIUrl":"https://doi.org/10.1155/2022/4672957","url":null,"abstract":"The key purpose of this paper is to work on the boundedness of generalized Bessel–Riesz operators defined with doubling measures in Lebesgue spaces with different measures. Relating Bessel decaying the kernel of the operators is satisfying some elementary properties. Doubling measure, Young's inequality, and Minköwski’s inequality will be used in proofs of boundedness of integral operators. In addition, we also explore the relation between the parameters of the kernel and generalized integral operators and see the norm of these generalized operators which will also be bounded by the norm of their kernel with different measures.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131930056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biomathematics is an interdisciplinary subject combining mathematics and biology that is widely applicable to the analysis of biological problems. In this paper, we provide a mathematical model of two-phase hepatic blood flow in a jaundice patient's artery, treating the blood flow as a two-phase process. Clinical data of a jaundice patient (blood pressure and hemoglobin) are gathered. First, hemoglobin is converted to hematocrit, and blood pressure is converted to a blood pressure drop. A mathematical model is then constructed for the examination of hepatic arteries in Newtonian and non-Newtonian motion. The relationship between the two-phase blood flow flux and the blood pressure drop in the hepatic artery is established, and the blood pressure drop is determined for various hematocrit levels. The patient's state is characterized by the slope of the linear relationship between the computed blood pressure drop and the hematocrit.
{"title":"Roll of Newtonian and Non-Newtonian Motion in Analysis of Two-Phase Hepatic Blood Flow in Artery during Jaundice","authors":"Abha Singh, R. Khan, Sumit Kushwaha, Tahani Alshenqeeti","doi":"10.1155/2022/7388096","DOIUrl":"https://doi.org/10.1155/2022/7388096","url":null,"abstract":"Biomathematics is an interdisciplinary subject consisting of mathematics and biology, which is widely applicable for the analysis of biological problems. In this paper, we provide a mathematical model of two-phase hepatic blood flow in a jaundice patient’s artery. The blood flow is thought to be a two-phased process. The clinical data of a jaundice patient (blood pressure and hemoglobin) is gathered. To begin, hemoglobin is transformed into hematocrit, and blood pressure is turned to a decline in blood pressure. For the examination of hepatic arteries in Newtonian and non-Newtonian movements, a mathematical model is constructed. The relationship between two-phase blood flow flux and blood pressure reduction in the hepatic artery is established. For various hematocrit levels, the blood pressure decrease is determined. The patient’s states are defined by the slope of the linear relationship between computed blood pressure decrease and hematocrit.","PeriodicalId":301406,"journal":{"name":"Int. J. Math. Math. Sci.","volume":"96 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114103233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}