Option Pricing under Delay Geometric Brownian Motion with Regime Switching
Pub Date: 2016-10-19 | DOI: 10.11648/J.SJAMS.20160406.13
Tianyao Fang, Liang Hu, Yun Xin
We investigate the option pricing problem when the price dynamics of the underlying risky assets are driven by delay geometric Brownian motions with regime switching. That is, the market interest rate, the appreciation rate and the volatility of the risky assets depend on the past stock prices and on the unobservable states of the economy, which are modulated by a continuous-time Markov chain. The market described by the model is incomplete, so the martingale measure is not unique, and the Esscher transform is employed to determine an equivalent martingale measure. We prove that the model has a unique positive solution and that the price of contingent claims under the model can be computed numerically, if not analytically.
{"title":"Option Pricing under Delay Geometric Brownian Motion with Regime Switching","authors":"Tianyao Fang, Liang Hu, Yun Xin","doi":"10.11648/J.SJAMS.20160406.13","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160406.13","url":null,"abstract":"We investigate the option pricing problem when the price dynamics of the underlying risky assets are driven by delay geometric Brownian motions with regime switching. That is, the market interest rate, the appreciation rate and the volatility of the risky assets depend on the past stock prices and the unobservable states of the economy which are modulated by a continuous-time Markov chain. The market described by the model is incomplete, the martingale measure is not unique and the Esscher transform is employed to determine an equivalent martingale measure. We proved the model has a unique positive solution and the price of the contingent claims under the model can be computable numerically if not analytically.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129261142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discussion on Normalization Methods of Interval Weights
Pub Date: 2016-10-17 | DOI: 10.11648/J.SJAMS.20160405.19
Yimeng Sui, Zhenyuan Wang
This paper collects the classic and recently proposed normalization methods, identifies the deficiencies of existing normalization methods for interval weights, and introduces a new normalization method for interval weights. When normalizing interval weights, it is important to check whether, after normalizing, the locations of the interval centers and the lengths of the intervals keep the same proportions as those of the original interval weights. It is found that some recently proposed normalization methods violate these goodness criteria. In the current work we propose a new normalization method for interval weights that preserves both the proportions of the distances from the interval centers to the origin and the proportions of the interval lengths, and that also eliminates the redundancy in the originally given interval weights. The new method can be widely applied in information fusion and decision making under uncertainty.
{"title":"Discussion on Normalization Methods of Interval Weights","authors":"Yimeng Sui, Zhenyuan Wang","doi":"10.11648/J.SJAMS.20160405.19","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.19","url":null,"abstract":"This paper is collecting the classic and newly normalization methods, finding deficiency of existing normalization methods for interval weights, and introducing a new normalization methods for interval weights. When we normalize the interval weights, it is very important and necessary to check whether, after normalizing, the location of interval centers as well as the length of interval weights keep the same proportion as those of original interval weights. It is found that, in some newly normalization methods, they violate these goodness criteria. In current work, for interval weights, we propose a new normalization method that reserves both proportions of the distances from interval centers to the origin and of interval lengths, and also eliminates the redundancy from the original given interval weights. This new method can be widely applied in information fusion and decision making in environments with uncertainty.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130572917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Models for Count Data
Pub Date: 2016-10-15 | DOI: 10.11648/J.SJAMS.20160406.12
A. Muoka, Oscar Ngesa, A. Waititu
Statistical analyses involving count data may take several forms depending on the context of use: simple counts, such as the number of plants in a particular field, and categorical data, in which counts represent the number of items falling into each of several categories. The most widely adopted model for analyzing count data is the Poisson model. Other models that can be considered for modeling count data are the negative binomial and hurdle models. It is important that these models be systematically considered and compared before choosing one at the expense of the others to handle count data. In real-world situations, count data sets may contain zero counts that carry meaning of their own. In this work, statistical simulation was used to compare the performance of these count data models. Count data sets with different proportions of zeros were simulated, and the Akaike Information Criterion (AIC) was used to compare how well the models fit the simulated data sets. From the results of the study it was concluded that the negative binomial model fits over-dispersed data better when the proportion of zeros is below 0.3, and that the hurdle model performs better when the proportion of zeros is 0.3 or above.
{"title":"Statistical Models for Count Data","authors":"A. Muoka, Oscar Ngesa, A. Waititu","doi":"10.11648/J.SJAMS.20160406.12","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160406.12","url":null,"abstract":"Statistical analyses involving count data may take several forms depending on the context of use, that is; simple counts such as the number of plants in a particular field and categorical data in which counts represent the number of items falling in each of the several categories. The mostly adapted model for analyzing count data is the Poisson model. Other models that can be considered for modeling count data are the negative binomial and the hurdle models. It is of great importance that these models are systematically considered and compared before choosing one at the expense of others to handle count data. In real world situations count data sets may have zero counts which have an importance attached to them. In this work, statistical simulation technique was used to compare the performance of these count data models. Count data sets with different proportions of zero were simulated. Akaike Information Criterion (AIC) was used in the simulation study to compare how well several count data models fit the simulated datasets. From the results of the study it was concluded that negative binomial model fits better to over-dispersed data which has below 0.3 proportion of zeros and that hurdle model performs better in data with 0.3 and above proportion of zero.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"204 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126034679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating the Extreme Financial Risk of the Kenyan Shilling Versus US Dollar Exchange Rates
Pub Date: 2016-10-14 | DOI: 10.11648/J.SJAMS.20160406.11
Charles Kithenge Chege, J. Mung'atu, Oscar Ngesa
In the last decade, world financial markets, including the Kenyan market, have been characterized by significant instabilities. This has resulted in criticism of the available risk management systems and has motivated research on better methods capable of identifying rare events with heavy consequences. Given the high volatility of the Kenyan Shilling/US Dollar exchange rate, it is important to develop a more reliable method of evaluating the financial risk associated with such data. In the recent past, extensive research has been carried out to analyze the extreme variations that financial markets are subject to, mostly because of currency crises, stock market crashes and large credit defaults. We considered the behavior of the tails of financial series, focusing in particular on the use of extreme value theory to assess tail-related risk; we thus aim at providing a modeling tool for modern risk management. Extreme value theory provides a theoretical foundation on which statistical models describing extreme events can be built, which helps in predicting such future rare events.
{"title":"Estimating the Extreme Financial Risk of the Kenyan Shilling Versus Us Dollar Exchange Rates","authors":"Charles Kithenge Chege, J. Mung'atu, Oscar Ngesa","doi":"10.11648/J.SJAMS.20160406.11","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160406.11","url":null,"abstract":"In the last decade, world financial markets, including the Kenyan market have been characterized by significant instabilities. This has resulted to criticism on available risk management systems and motivated research on better methods capable of identifying rare events that have resulted in heavy consequences. With the high volatility of the Kenyan Shilling/Us dollar exchange rates, it is important to come up with a more reliable method of evaluating the financial risk associated with such financial data. In the recent past, extensive research has been carried out to analyze extreme variations that financial markets are subject to, mostly because of currency crises, stock market crashes and large credit defaults. We considered the behavior of the tails of financial series. More specially was focus on the use of extreme value theory to assess tail-related risk; we thus aim at providing a modeling tool for modern risk management. Extreme Value Theory provides a theoretical foundation on which we can build statistical models describing extreme events. This will help in predictability of such future rare events.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129041759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantile Regression Model for Measurement of Equity Portfolio Risk: A Case Study of the Nairobi Securities Exchange
Pub Date: 2016-10-09 | DOI: 10.11648/J.SJAMS.20160405.18
Kinyua Mark Njega, J. Mung'atu
Quantile regression provides a method of estimating quantiles of a conditional distribution. It achieves this by minimizing an asymmetrically weighted sum of absolute errors, thus partitioning the conditional distribution into quantiles. Lower conditional quantiles are of interest in the estimation of Value-at-Risk (VaR) because they indicate downward movements of financial returns. Current risk measurement methods do not estimate VaR effectively because they make assumptions about the distribution tails. Financial data are sampled frequently, leading to a heavier-tailed distribution than the normal and Student's t distributions. A remedy is to use a method that makes no assumptions about the tail distribution of financial returns. Little research has been done on the use of quantile regression to estimate portfolio risk on the Nairobi Securities Exchange. The main aims of this study were to model portfolio risk as a lower conditional quantile, to compare the performance of this model with existing risk measurement methods, and to predict Value-at-Risk. From the fitted conditional quantile GARCH model, 62.4% of VaR can be explained by the past standard deviation and absolute residual of the NSE 20 Share Index optimal portfolio returns. The fitted model had a lower proportion of failures, 7.65%, than commonly used VaR models.
{"title":"Quantile Regression Model for Measurement of Equity Portfolio Risk a Case Study of Nairobi Securities Exchange","authors":"Kinyua Mark Njega, J. Mung'atu","doi":"10.11648/J.SJAMS.20160405.18","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.18","url":null,"abstract":"Quantile regression provides a method of estimating quantiles from a conditional distribution density. It is achieves this by minimizing asymmetrically weighted sum of absolute errors thus partitioning the conditional distribution into quantiles. Lower conditional quantiles are of interest in estimation of Value-at-Risk because they indicate downward movement of financial returns. Current risk measurement methods do not effectively estimate the VaR since they make assumptions in the distribution tails. Financial data is sampled frequently leading to a heavier tailed distribution compared to a normal and student t distribution. A remedy to this is to use a method that does not make assumptions in the tail distribution of financial returns. Little research has been done on the usage of quantile regression in the estimation of portfolio risk in the Nairobi Securities Exchange. The main aim of this study was to model the portfolio risk as a lower conditional quantile, compare the performance of this model to the existing risk measurement methods and to predict the Value-at-Risk. This study presents summary of key findings and conclusion drawn from the study. From the fitted conditional quantile GARCH model 62.4% of VaR can be explained by past standard deviation and absolute residual of NSE 20 share index optimal portfolio returns. The fitted model had less proportion of failure of 7.65% compared to commonly used VaR models.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114423732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical Bayes Test for Parameter of Inverse Exponential Distribution
Pub Date: 2016-10-08 | DOI: 10.11648/J.SJAMS.20160405.17
Guobing Fan
The aim of this paper is to study the empirical Bayes test for the parameter of the inverse exponential distribution. First, the Bayes test rule for the one-sided test is derived, in the case of independent and identically distributed random variables, under a weighted linear loss function. Then the empirical Bayes one-sided test rule is constructed using a kernel-type density estimate and the empirical distribution function. Finally, the asymptotic optimality of the test rule is obtained. It is shown that the convergence rates of the proposed empirical Bayes test rules can be arbitrarily close to O(n^(-1/2)) under suitable conditions.
{"title":"Empirical Bayes Test for Parameter of Inverse Exponential Distribution","authors":"Guobing Fan","doi":"10.11648/J.SJAMS.20160405.17","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.17","url":null,"abstract":"The aim of this paper is to study the empirical Bayes test for the parameter of inverse exponential distribution. First, the Bayes test rule of one-sided test is derived in the case of independent and identically distributed random variables under weighted linear loss function. Then the empirical Bayes one-sided test rule is constructed by using the kernel-type density function and empirical distribution function. Finally, the asymptotically optimal property of the test function is obtained. It is shown that the convergence rates of the proposed empirical Bayes test rules can arbitrarily close to O(n-1/2) under suitable conditions.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123102535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimax Estimation of the Parameter of the Erlang Distribution Under Different Loss Functions
Pub Date: 2016-10-08 | DOI: 10.11648/J.SJAMS.20160405.16
Lanping Li
The aim of this article is to study the estimation of the parameter of the Erlang distribution based on complete samples. The Bayes estimators of the parameter of the Erlang distribution are obtained under three different loss functions, namely the weighted squared error loss, squared log error loss and entropy loss functions, using a conjugate inverse Gamma prior distribution. Then the minimax estimators of the parameter are derived using Lehmann's theorem. Finally, the performances of these estimators are compared in terms of the risks obtained under the squared error loss function.
{"title":"Minimax Estimation of the Parameter of ЭРланга Distribution Under Different Loss Functions","authors":"Lanping Li","doi":"10.11648/J.SJAMS.20160405.16","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.16","url":null,"abstract":"The aim of this article is to study the estimation of the parameter of ЭРланга distribution based on complete samples. The Bayes estimators of the parameter of ЭРланга distribution are obtained under three different loss functions, namely, weighted square error loss, squared log error loss and entropy loss functions by using conjugate prior inverse Gamma distribution. Then the minimax estimators of the parameter are derived by using Lehmann’s theorem. Finally, performances of these estimators are compared in terms of risks which obtained under squared error loss function.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132734625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Application of E-LINMAP Model for Optimal Decision Making on Location of VIP Fast Food Restaurant in Akwa Ibom State, Nigeria
Pub Date: 2016-09-28 | DOI: 10.11648/J.SJAMS.20160405.15
E. Okon, Okpara Virtue Ihuoma
In this paper, the extended LINMAP model developed in (Effanga and Okpara, 2015) is applied to make an optimal decision on the location of a VIP fast food restaurant in Akwa Ibom State of Nigeria. The management of the VIP fast food restaurant considered extending its services to five towns (Uyo, Eket, Ikot Ekpene, Oron and Ikot Abasi) in Akwa Ibom State. The attributes considered in the evaluation of the locations are population, number of retail outlets, average family income, start-up cost, and taxes. The solution of our model identifies Eket as the best town in which to operate the business, followed by Ikot Abasi, Uyo, Oron and Ikot Ekpene.
{"title":"On Application of E - Linmap Model for Optimal Decision Making on Location of VIP Fast Food Restaurant in Akwa Ibom State, Nigeria","authors":"E. Okon, Okpara Virtue Ihuoma","doi":"10.11648/J.SJAMS.20160405.15","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.15","url":null,"abstract":"In this paper, the extended LINMAP model developed in (Effanga and Okpara, 2015) is applied to make optimal decision on location of VIP fast food restaurant in Akwa State of Nigeria. The management of the VIP fast food restaurant considered extending their services to five towns (Uyo, Eket, Ikot Ekpene, Oron and Ikot Abasi) in Akwa Ibom State. The attributes considered in the evaluation of the locations are Population, Number of retail outlets, Average family income, Start-up cost, and Taxes. The solution of our model identifies Eket as the best town to operate the business followed by Ikot Abasi, Uyo, Oron and Ikot Ekpene.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130743743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Control for Discrete-Time Singular Markovian Jump Systems with Partly Unknown Transition Rates
Pub Date: 2016-09-24 | DOI: 10.11648/J.SJAMS.20160405.14
Yuhong Liu, Hui Li, Qishui Zhong, S. Zhong
This study investigates the problem of robust control for a class of discrete-time singular Markovian jump systems with partly unknown transition rates. Linear matrix inequality (LMI)-based sufficient conditions for stochastic stability and robust control are developed. Then, a static output feedback controller and a robust static output feedback controller are designed to ensure that the closed-loop systems are piecewise regular, causal and stochastically stable. Finally, numerical examples are presented to demonstrate the effectiveness and advantages of the theoretical results.
{"title":"Robust Control for Discrete-Time Singular Marovian Jump Systems with Partly Unknown Transition Rates","authors":"Yuhong Liu, Hui Li, Qishui Zhong, S. Zhong","doi":"10.11648/J.SJAMS.20160405.14","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.14","url":null,"abstract":"This study investigates the problem of robust control for a class of discrete-time singular Marovian jump systems with partly unknown transition rates. Linear matrix inequality (LMI)-based sufficient conditions for the stochastic stability and robust control are developed. Then, a static output feedback controller and a robust static output feedback controller are designed to make sure the closed-loop systems are piecewise regular, causal and stochastically stable. Finally, numerical examples are presented to demonstrate the effectiveness and advantages of the theoretical results.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115459334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-destructive Prediction Models for Estimation of Leaf Area for Most Commonly Grown Vegetable Crops in Ethiopia
Pub Date: 2016-09-18 | DOI: 10.11648/J.SJAMS.20160405.13
M. Yeshitila, M. Taye
Leaf area (LA) is a valuable key for evaluating plant growth; therefore rapid, accurate, simple and non-destructive methods for LA determination are important for physiological and agronomic studies. The objective of this study was to develop models for leaf area prediction from simple non-destructive measurements in some of the most commonly cultivated vegetable crop accessions in the country. A field experiment was carried out from May to August of 2014 at the Hawassa College of Agriculture research site, using ten selected accessions of the most commonly grown vegetable species: potato (Solanum tuberosum L.), cabbage (Brassica campestris L.), pepper (Capsicum annuum L.), beetroot (Beta vulgaris), Swiss chard (Beta vulgaris), sweet potato (Ipomoea batatas L.), snap bean (Vicia Snap L.) and onion (Allium cepa). A standard method (LI-COR LI-3000C) was also used to measure the actual leaf areas. All leaf area equations were derived as functions of leaf length (L) and leaf width (W), both in cm. ANOVA and multiple-regression analysis showed a close relationship between actual and predicted growth parameters. The leaf area prediction models produced in the present study are:
AREA (cm²) = -16.882 + 2.533 L + 4.5076 W for pepper, Melka Awaze variety;
AREA (cm²) = -18.943 + 2.225 L + 5.710 W for pepper, Melka Zale variety;
AREA (cm²) = 136.8524 + 2.68 L + 2.564 W for sweet potato;
AREA (cm²) = -193.518 + 8.633 L + 14.018 W for beetroot;
AREA (cm²) = -23.1534 + 1.1023 L + 16.156 W for onion;
AREA (cm²) = -260.265 + 27.115 (L * W) for cabbage;
AREA (cm²) = -422.973 + 22.752 L + 8.31 W for Swiss chard;
AREA (cm²) = 68.85 - 13.47 L + 7.34 W + 0.645 L² - 0.012 W² for snap bean.
The R² values (0.989, 0.976, 0.917, 0.926, 0.924, 0.966, 0.917 and 0.966 for pepper Melka Awaze, pepper Melka Zale, sweet potato, beetroot, onion, cabbage, Swiss chard and snap bean, respectively) and the standard errors for all subsets of the independent variables were significant at the p < 0.001 level.
{"title":"Non-destructive Prediction Models for Estimation of Leaf Area for Most Commonly Grown Vegetable Crops in Ethiopia","authors":"M. Yeshitila, M. Taye","doi":"10.11648/J.SJAMS.20160405.13","DOIUrl":"https://doi.org/10.11648/J.SJAMS.20160405.13","url":null,"abstract":"Leaf area (LA) is a valuable key for evaluating plant growth, therefore rapid, accurate, simple, and nondestructive methods for LA determination are important for physiological and agronomic studies. The objective of this study was to develop a model for leaf area prediction from simple non-destructive measurements in some most commonly cultivated vegetable crops’ accessions in the country. A field experiment was carried out from May to August of 2014 at ‘Hawassa College of Agriculture’s research site, using ten selected most commonly grown vegetable species of Potato (Solanum tuberosum. L), Cabbage (Brassica campestris L.), Pepper (Capsicum annuum L.), Beetroot (Beta vulgaris), Swisschard (Beta vulgaris), Sweet potato (Ipomoea batatas L.), Snapbean (Vicia Snap L.) and Onion (Allium cepa). A standard method (LICOR LI-3000C) was also used for measuring the actual areas of the leaves. All equations produced for leaf area were derived as affected by leaf length and leaf width. As a result of ANOVA and multiple-regression analysis, it was found that there was close relationship between actual and predicted growth parameters. The produced leaf area prediction models in the present study are: AREA (cm2) = -16.882+2.533L (cm) + 4.5076W (cm) for Pepper Melka Awaze Variety. AREA (cm2) = -18.943+2.225L (cm) + 5.710W (cm) for Pepper Melka Zale Variety. AREA (cm2) = 136.8524 + 2.68L (cm) + 2.564W (cm) for Sweet-potato. AREA (cm2) = -193.518 + 8.633L (cm) + 14.018W (cm) for Beetroot. AREA (cm2) = -23.1534 + 1.1023L (cm) + 16.156W (cm) for Onion. AREA (cm2) = -260.265 + 27.115 (L (cm) * W (cm)) for Cabbage. AREA (cm2) = -422.973 + 22.752L (cm) + 8.31W (cm) for Swisschard. AREA (cm2) = 68.85 – 13.47L (cm) + 7.34W + 0.645L2 (cm) -0.012W2 (cm) for Snapbean. R2 values (0.989, 0.976, 0.917, 0.926, 0.924, 0.966, 0.917, and 0.966 for the pepper Melka Awaze Variety, Pepper Melka Zale Variety, Sweetpotato, Beetroot, Onion, Cabbage, Swisschard and Snapbean respectively) and standard errors for all subsets of the independent variables were found to be significant at the p<0.001 level.","PeriodicalId":422938,"journal":{"name":"Science Journal of Applied Mathematics and Statistics","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115275341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}