Pub Date: 2023-09-16. DOI: 10.9734/ajpas/2023/v24i4532
Rukia Mbaita Mbaji, Troon John Benedict, Okumu Otieno Kevin
Measures of variation are statistical measures that help describe the distribution of a data set. They may be used separately or together, giving a wide variety of ways of measuring the variability of data. Researchers and mathematicians found that these measures were not perfect: they violated algebraic laws and possessed weaknesses that could not be ignored. As a result, a new measure of variation, known as the geometric measure of variation, was formulated. The new measure was able to overcome the weaknesses of the existing measures: it obeyed the algebraic laws, allowed further algebraic manipulation, and was not affected by outliers or skewed data sets. Researchers were also able to determine that the geometric measure was more efficient than the standard deviation and that its estimates were always smaller than those of the standard deviation, but they did not determine the relationship between the two measures or how sample characteristics affect the minimum difference between them. The main aim of this study was to empirically determine the ratio factor between the standard deviation and the geometric measure, and specifically how variables such as sample size, outliers and the geometric measure itself affect the minimum difference between the geometric measure and the standard deviation. Data simulation was used to achieve the study's objectives. Samples were simulated individually under four distributions: normal, Poisson, Chi-square and Bernoulli. A hierarchical linear regression model was fitted to the normal, skewed, binary and count data sets. Based on the results obtained, there is always a positive, significant ratio factor between the geometric measure and the standard deviation in all types of data sets, and this ratio factor was influenced by the existence of outliers and by sample size.
The existence of outliers increased the difference between the geometric measure and the standard deviation in skewed and count data sets, while in binary data sets it decreased the difference. For normal and binary data sets, an increase in sample size had no significant effect on the difference between the geometric measure and the standard deviation, but for skewed and count data sets an increase in sample size decreased that difference.
Title: Determinants of Estimate Difference between Geometric Measure and Standard Deviation (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-15. DOI: 10.9734/ajpas/2023/v24i4531
Rohit Kumar Verma
Several significant non-Shannon entropy inequalities, the optimal guessing theorems, and the relationship between the discrete memoryless source pair and the probability mass function are all covered in the present communication. These results may prove, directly or indirectly, to be significant for the information theory literature.
Title: Obvious Disparities on Optimal Guesswork Wiretapper Moments under Mismatch Related to Non-Shannon Cypher System (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-13. DOI: 10.9734/ajpas/2023/v24i4530
Shehu A., Dauran N. S., Usman A. G.
Missing values occur in experiments for several reasons; these may be natural or due to failure on the part of the experimenter. When a missing value occurs, it biases the analysis and reduces its efficiency. The study considered the Sudoku square design in which row-blocks and column-blocks are equal and rows and columns are equal, with one missing value. The missing value is estimated by comparison with the corresponding Latin square design and with the randomized block design; the estimator for the missing value is derived, and a numerical illustration shows how the estimator is used to obtain the estimate of a missing value in a Sudoku square design.
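The abstract derives its estimator by comparison with the randomized block design, but does not reproduce the formula. As a hedged illustration of that comparison point only (not the authors' Sudoku estimator), the classical Yates estimate of a single missing value in a randomized block design can be sketched as follows; the function name and data layout are my own:

```python
from typing import List, Tuple

def estimate_missing_rbd(data: List[List[float]], cell: Tuple[int, int]) -> float:
    """Yates estimate of one missing value in a randomized block design:
    x = (t*T + b*B - G) / ((t - 1) * (b - 1)),
    where t = number of treatments (rows), b = number of blocks (columns),
    T = total of the treatment containing the missing cell,
    B = total of the block containing it, and
    G = grand total of all observed values (the missing cell is excluded)."""
    t, b = len(data), len(data[0])
    i, j = cell
    T = sum(data[i][c] for c in range(b) if c != j)
    B = sum(data[r][j] for r in range(t) if r != i)
    G = sum(data[r][c] for r in range(t) for c in range(b) if (r, c) != cell)
    return (t * T + b * B - G) / ((t - 1) * (b - 1))
```

For purely additive data (treatment effect plus block effect, no error) the formula recovers the deleted value exactly, which makes it easy to sanity-check.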
Title: Estimation of Missing Value in Sudoku Square Design (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-08. DOI: 10.9734/ajpas/2023/v24i4529
Miriam Sitienei, A. Otieno, A. Anapapa
Predictive analytics utilizes historical data and knowledge to predict future outcomes and provides a method for evaluating the accuracy and reliability of these forecasts. Artificial Intelligence (AI) is a tool of predictive analytics: it trains computers to simulate intelligent human behaviors such as learning, judgment and decision-making, and it has received a great deal of attention in almost all areas of research. Machine learning is a branch of AI that has been used to solve classification and regression problems, and its advances have helped boost agricultural gains; yield prediction is one of the agricultural tasks that has embraced machine learning. K-Nearest Neighbor (KNN) Regression is a machine learning algorithm used for prediction tasks. It is like KNN classification, except that it predicts a continuous output value for a given input instead of a class label. The basic idea is to find the K nearest neighbors of a given input point according to a distance metric and then use the average (or weighted average) of those neighbors' output values as the prediction. The distance metric can vary with the type of data being analyzed; common choices include the Euclidean, Manhattan and Minkowski distances. This paper presents the application of KNN regression to maize yield prediction in Uasin Gishu County, in the North Rift region of Kenya. Questionnaires were distributed to 900 randomly selected maize farmers across the thirty wards to obtain primary data.
With a train-test split ratio of 80:20, the KNN regression algorithm was able to predict maize yield; its prediction performance was evaluated using the Root Mean Squared Error (RMSE = 0.4948), Mean Squared Error (MSE = 0.2803), Mean Absolute Error (MAE = 0.4591) and Mean Absolute Percentage Error (MAPE = 36.17). According to the study findings, the algorithm was able to predict maize yield in this maize-producing county.
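The neighbor-averaging idea described above can be sketched in a few lines of pure Python. This is an illustrative from-scratch version, not the authors' implementation; in practice a library estimator such as scikit-learn's `KNeighborsRegressor` would normally be used:

```python
import math
from typing import List, Sequence

def knn_regress(train_X: List[Sequence[float]], train_y: List[float],
                x: Sequence[float], k: int = 3) -> float:
    """Predict the output for x as the mean of the outputs of the k
    training points nearest to x under Euclidean distance."""
    def dist(a: Sequence[float], b: Sequence[float]) -> float:
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # Sort training pairs by distance to x and keep the k closest.
    neighbours = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], x))[:k]
    return sum(y for _, y in neighbours) / k
```

For example, with training data lying on y = 2x, `knn_regress([[1], [2], [3], [10]], [2, 4, 6, 20], [2.1], k=3)` averages the outputs of the three nearest inputs (2, 3 and 1) to give 4.0.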
Title: An Application of K-Nearest-Neighbor Regression in Maize Yield Prediction (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-05. DOI: 10.9734/ajpas/2023/v24i3528
Ahmed Abdulkadir, Bannister Jerry Zachary, Nafisat Yusuf, Kabiru Musa
The study aimed to use the Close-Knit Regression (CKR) technique to approximate values that are absent because of the missing completely at random (MCAR) mechanism. Bivariate data sets were generated and simulated under the MCAR mechanism at low (10%) and high (60%) missingness rates. The CKR method was compared with other single-imputation techniques such as mean imputation, simple regression and K-Nearest Neighbors (K-NN). The differences between parameter estimates (mean, correlation coefficient r, maximum, minimum and standard deviation) obtained from the imputed data and those from the original data, together with error rates such as the mean absolute error (MAE) and root mean square error (RMSE), were used as metrics for judging the efficiency of the techniques. Results showed that the CKR technique was the best of those considered: its imputed data had parameter estimates closest to those of the original data, with the lowest error rates at 10% (MAE of 0.01 and RMSE of 0.047) and 60% (MAE of 0.021 and RMSE of 0.073). CKR is therefore a suitable single-imputation technique, producing estimates close to the original data and parameters with low error rates when data are MCAR.
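CKR itself is not specified in this abstract, but the evaluation protocol it describes (delete values completely at random, impute, then score MAE/RMSE against the true values) can be sketched with two of the baseline single-imputation methods mentioned, mean imputation and a 1-NN-style borrow. All data-generating choices and parameter values below are illustrative assumptions, not the study's data:

```python
import math
import random

random.seed(42)
# Bivariate data with a strong linear relationship: y ≈ 3 + 2x + noise.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [3 + 2 * xi + random.gauss(0, 0.3) for xi in x]

# Delete 10% of the y values completely at random (MCAR).
missing = set(random.sample(range(n), n // 10))
observed = [i for i in range(n) if i not in missing]

# Mean imputation: replace every missing y with the observed mean.
ybar = sum(y[i] for i in observed) / len(observed)
mean_imp = {i: ybar for i in missing}

# 1-NN imputation: borrow y from the observed record with the closest x.
knn_imp = {i: y[min(observed, key=lambda j: abs(x[j] - x[i]))] for i in missing}

def mae(imp):
    return sum(abs(imp[i] - y[i]) for i in missing) / len(missing)

def rmse(imp):
    return math.sqrt(sum((imp[i] - y[i]) ** 2 for i in missing) / len(missing))

# With a strong x-y relationship, the neighbour-based method should win.
print(mae(mean_imp), mae(knn_imp), rmse(mean_imp), rmse(knn_imp))
```

The same MAE/RMSE scoring loop would apply unchanged to any other single-imputation method, including CKR, once its imputed values are available.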
Title: Close-Knit-Regression: An Efficient Technique in Estimating Missing Completely at Random Data (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-02. DOI: 10.9734/ajpas/2023/v24i3526
D. I, A. I. U., Loko, O. P
The benefit of monetary assets cannot be overemphasized, because they serve as the engine room of every investment that accumulates wealth on a daily, weekly, monthly or yearly basis. In this study, a closed-form solution of a Stochastic Differential Equation (SDE) was successfully exploited for the analysis of asset values and other stock market quantities. The solutions for the stock variables were examined through simulations that describe the behavior of asset values with respect to their maturity periods. Finally, the skewness and kurtosis of the asset values were obtained to give investors proper direction in decision making.
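The abstract does not name the SDE. Assuming the standard geometric Brownian motion model dS = μS dt + σS dW, whose closed-form solution is S_t = S_0 exp((μ − σ²/2)t + σW_t), the simulate-then-compute-moments workflow described above might look like the following sketch; all parameter values are illustrative, not the study's:

```python
import math
import random

random.seed(0)

def gbm_terminal(s0: float, mu: float, sigma: float, t: float, n: int):
    """Sample n terminal asset values from the closed-form GBM solution
    S_t = s0 * exp((mu - sigma^2/2) * t + sigma * sqrt(t) * Z), Z ~ N(0, 1)."""
    return [s0 * math.exp((mu - 0.5 * sigma ** 2) * t
                          + sigma * math.sqrt(t) * random.gauss(0, 1))
            for _ in range(n)]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((v - m) ** 2 for v in xs) / n)
    return sum((v - m) ** 3 for v in xs) / (n * s ** 3)

def kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((v - m) ** 2 for v in xs) / n
    return sum((v - m) ** 4 for v in xs) / (n * s2 ** 2)

vals = gbm_terminal(s0=100, mu=0.08, sigma=0.2, t=1.0, n=50_000)
# Terminal GBM values are lognormal: right-skewed, kurtosis above 3.
print(skewness(vals), kurtosis(vals))
```

The positive skewness and heavy right tail are exactly the distributional features an investor would read off before deciding on a position.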
Title: A Two Model Approach of Assessing Asset Value Functions for Capital Investments (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-02. DOI: 10.9734/ajpas/2023/v24i3525
Musyoki Michael, Alilah David, Angwenyi David
Kenya’s horticulture sector is one of the key contributors to the country’s national income. In this paper, we apply the Box-Jenkins SARIMA time series modeling approach to develop a model that best describes the income to Kenya’s economy from the export of horticultural produce. The analysis considered monthly data from August 1998 to March 2023. It was found that SARIMA(3,1,4)(0,1,0)12, with seasonal period 12, is the most suitable model for describing the income from the export of Kenya’s horticultural produce.
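As a minimal sketch of what the non-seasonal and seasonal differencing orders d = 1 and D = 1 (period 12) in SARIMA(3,1,4)(0,1,0)12 do to monthly data, the snippet below applies both differences to a synthetic monthly series; this is only the differencing step, not the authors' model fit, and the series is made up for illustration:

```python
def difference(series, lag=1):
    """Apply a single difference at the given lag: z[t] = y[t] - y[t - lag]."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# Synthetic monthly series: linear trend plus an annual (period-12) spike.
y = [10 + 0.5 * t + (5 if t % 12 == 0 else 0) for t in range(48)]

# d = 1 regular difference removes the trend;
# D = 1 at lag 12 removes the repeating seasonal pattern.
z = difference(difference(y, lag=1), lag=12)
# For this deterministic series the doubly differenced values are all zero,
# leaving only what a (3,1,4) ARMA part would model in noisy real data.
```

In a full fit (e.g. with `statsmodels`' SARIMAX), these two differences are applied internally before the AR(3) and MA(4) terms are estimated.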
Title: Time Series Modeling of Monetary Value from Kenya’s Horticultural Export Produce (Asian Journal of Probability and Statistics)
Pub Date: 2023-09-02. DOI: 10.9734/ajpas/2023/v24i3527
Ajare Emmanuel Oloruntoba, Adefabi Adekunle, Adeyemo Abiodun
The main purpose of this study is to assess the performance of BFTSC (Break for Time Series Components) and GFTSC (Group for Time Series Components) in identifying time series components using volatile simulated and empirical data. BFTSC was created to capture the trend, seasonal, cyclical and irregular components and present them in a time series plot, while GFTSC was designed to capture all four time series components together with the equations that produce each component. BFAST (Break for Additive, Seasonal and Trend) identifies only the trend and seasonal components, treating all remaining components as random; identifying the trend and seasonal components alone is not enough to give a clear picture of all the components in a time series data set. Both techniques were evaluated using low- and high-volatility simulated and empirical data. For yearly data, sample sizes of 8, 16 and 24 years represented small, medium and large samples; for monthly data, 48, 96 and 144 months were used. Each sample size was replicated 100 times. GFTSC and BFTSC performed very well for large samples with a linear trend, for both monthly and yearly data (approximately 100%), while performance dropped for highly volatile data whose trend follows a curve (such as quadratic or cubic trends). These findings indicate that BFTSC and GFTSC can provide a better alternative to the manual technique and to BFAST for data with a linear trend; hence BFTSC and GFTSC are recommended for public use.
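BFTSC and GFTSC themselves are not reproduced in this abstract. As a hedged stand-in, the kind of trend and seasonal extraction they target can be illustrated with a classical additive decomposition (centred moving-average trend plus re-centred seasonal means), shown here on noise-free synthetic data where the decomposition is exact; all function names and the test series are my own:

```python
def centred_ma(y, period=12):
    """Centred moving-average trend for an even period: full weight on the
    middle period-1 points, half weight on the two end points."""
    h = period // 2
    out = {}
    for t in range(h, len(y) - h):
        window = sum(y[t - h + 1:t + h]) + 0.5 * (y[t - h] + y[t + h])
        out[t] = window / period
    return out

def seasonal_means(y, trend, period=12):
    """Average detrended value per season, re-centred to sum to zero."""
    buckets = {m: [] for m in range(period)}
    for t, tr in trend.items():
        buckets[t % period].append(y[t] - tr)
    raw = [sum(buckets[m]) / len(buckets[m]) for m in range(period)]
    grand = sum(raw) / period
    return [v - grand for v in raw]

# Linear trend + fixed zero-sum monthly effect, no noise: recovery is exact.
season = [3, 1, -2, 0, 4, -1, 2, -3, 1, 0, -4, -1]
y = [0.5 * t + season[t % 12] for t in range(120)]
trend = centred_ma(y)
s_hat = seasonal_means(y, trend)
```

On real data the residual after removing these two components would still mix the cyclical and irregular parts, which is precisely the gap the abstract says BFTSC and GFTSC aim to close relative to BFAST.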
Title: Examining the Efficacy of Break for Time Series Components (BFTSC) and Group for Time Series Components (GFTSC) with Volatile Simulated and Empirical Data (Asian Journal of Probability and Statistics)
Pub Date: 2023-08-31. DOI: 10.9734/ajpas/2023/v24i3524
Troon J. Benedict, Onyango Fredrick, Karanjah Anthony, Njunguna Edward
Construction of Balanced Incomplete Block Designs (BIBDs) is a combinatorial problem that involves arranging v treatments into b blocks, each of size k, such that each treatment is replicated exactly r times in the design and each pair of treatments occurs together in λ blocks. Several methods of constructing BIBDs exist; however, these methods still cannot be used to construct all BIBDs, so several BIBDs remain unknown because no general construction method is known. The study aimed to develop a new construction method that could aid in constructing more BIBDs. The study derived a new class of BIBD from the un-reduced BIBD with parameters v and k, where k ≥ 3, by selecting all blocks of the un-reduced BIBD that contain a particular treatment i, then deleting treatment i from each selected block while retaining all other treatments. The resulting design was the Derived Reduced BIBD with parameters v* = v - 1, b* = C(v-1, k-1), k* = k - 1, r* = C(v-2, k-2), λ* = C(v-3, k-3), where C(n, m) denotes the binomial coefficient. In conclusion, the construction method is simple and can be used to construct several BIBDs, which could assist in resolving cases of BIBDs whose existence is still unknown.
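The deletion construction described above is easy to verify computationally. The sketch below (function name my own) builds the derived design from the un-reduced BIBD, i.e. all k-subsets of the v treatments, and checks the stated parameters for v = 7, k = 3:

```python
from itertools import combinations
from math import comb

def derived_reduced_bibd(v: int, k: int, i: int = 0):
    """From the un-reduced BIBD (all k-subsets of {0..v-1}), keep the
    blocks containing treatment i and delete i from each of them."""
    return [tuple(t for t in block if t != i)
            for block in combinations(range(v), k) if i in block]

v, k = 7, 3
blocks = derived_reduced_bibd(v, k)

b_star = len(blocks)                                      # should be C(v-1, k-1)
r_star = sum(1 for blk in blocks if 1 in blk)             # should be C(v-2, k-2)
lam = sum(1 for blk in blocks if 1 in blk and 2 in blk)   # should be C(v-3, k-3)
print(b_star, r_star, lam)
```

The counts follow directly: after deleting treatment i, the derived blocks are exactly the (k-1)-subsets of the remaining v-1 treatments, each appearing once.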
Title: Derived Reduced Balanced Incomplete Block Design (Asian Journal of Probability and Statistics)
Pub Date: 2023-08-31. DOI: 10.9734/ajpas/2023/v24i2523
Rajesh Singh, Rohan Mishra
Aims/Objectives: Various efficient estimators using single and dual auxiliary variables, built from different functions including the log and exponential, have been developed under the SRSWOR design. Since the adaptive cluster sampling (ACS) design is relatively new, estimators using functions such as the log and exponential with single and dual auxiliary variables have not been explored much there. Therefore, in this article we propose two wider classes of estimators, using single and dual auxiliary variables respectively, so that properties such as the bias and mean squared error of estimators built from the log, exponential or any other function belonging to the proposed wider classes, even those not yet developed and studied, are known in advance. Formulae for the bias and mean squared error have been derived and presented. Further, since log-type estimators have not been studied extensively in the ACS design, we develop new log-type classes from each of the proposed wider classes and develop and study some new log-type member estimators. To examine the performance of the newly developed log-type estimators against some competing estimators, simulation studies were conducted, and all the estimators were further applied to real data to estimate the average number of mules in the Indian state of Assam. The studies show that the newly developed log-type estimators perform better.
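The paper's ACS machinery is not reproduced in this abstract. As a hedged illustration of why auxiliary-variable estimators of this kind can help, the sketch below compares the plain sample mean, the classical ratio estimator ȳX̄/x̄, and one log-type form ȳ(1 + log(X̄/x̄)) under simple random sampling without replacement; the log-type form is an illustrative first-order analogue chosen by me, not necessarily a member of the authors' classes, and the population is synthetic:

```python
import math
import random

random.seed(7)
N, n, reps = 1000, 50, 300
# Synthetic population where y is strongly related to the auxiliary x.
X = [random.uniform(10, 50) for _ in range(N)]
Y = [5 * xi + random.gauss(0, 5) for xi in X]
Xbar, Ybar = sum(X) / N, sum(Y) / N  # population means (Xbar assumed known)

def one_draw():
    idx = random.sample(range(N), n)  # SRSWOR draw
    xb = sum(X[i] for i in idx) / n
    yb = sum(Y[i] for i in idx) / n
    mean_est = yb
    ratio_est = yb * Xbar / xb
    log_est = yb * (1 + math.log(Xbar / xb))
    return mean_est, ratio_est, log_est

ests = [one_draw() for _ in range(reps)]
mse = lambda j: sum((e[j] - Ybar) ** 2 for e in ests) / reps
# Both auxiliary-variable estimators should dominate the plain mean here.
print(mse(0), mse(1), mse(2))
```

Since log(X̄/x̄) ≈ (X̄ − x̄)/x̄ when x̄ is near X̄, the log-type form is first-order equivalent to the ratio estimator, which is why both gain sharply when x and y are strongly correlated.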
Title: Wider Classes of Estimators in Adaptive Cluster Sampling (Asian Journal of Probability and Statistics)