Academic research and the financial industry have recently shown great interest in machine learning algorithms capable of solving complex learning tasks, although in the field of firms' default prediction the lack of interpretability has prevented the widespread adoption of black-box models. To overcome this drawback while retaining the high performance of black-boxes, this paper adopts a model-agnostic approach. Accumulated Local Effects and Shapley values are used to shape the predictors' impact on the likelihood of default and to rank them according to their contribution to the model outcome. Prediction is performed by two machine learning algorithms (eXtreme Gradient Boosting and feedforward neural networks), compared with three standard discriminant models. Results show that, in our analysis of the Italian small and medium manufacturing enterprises, the eXtreme Gradient Boosting algorithm delivers the highest overall classification power while maintaining a rich interpretation framework to support decisions.
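The Shapley attribution mentioned above can be illustrated in miniature. The sketch below is mine, not the paper's pipeline: it computes exact Shapley values for a tiny three-feature scoring function directly from the permutation definition.

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings in which features are revealed."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline point
        prev = model(z)
        for j in order:
            z[j] = x[j]               # reveal feature j
            cur = model(z)
            phi[j] += cur - prev      # marginal contribution of j
            prev = cur
    return [p / len(perms) for p in phi]

# Additive toy model: attributions recover each feature's own term.
model = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# phi == [2.0, 3.0, -1.0]
```

The attributions always sum to model(x) minus model(baseline), the efficiency property that makes a Shapley-based ranking of predictors coherent.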
"Lost in a black-box? Interpretable machine learning for assessing Italian SMEs default", by Lisa Crosato, Caterina Liberati, and Marco Repetto. Applied Stochastic Models in Business and Industry, published 2023-08-07. DOI: 10.1002/asmb.2803.
The authors should be congratulated on a very interesting and insightful discussion that motivates the use of Bayesian inference in reliability theory, especially when the number of observed failures is small. The following log-location-scale distributions are considered: the lognormal distribution and the Weibull distribution. The importance of reparameterization is discussed, where it is, for example, more useful to replace the scale parameter with a certain quantile. A very important practical advantage of this is given as follows: “Elicitation of a prior distribution is facilitated because the parameter[s] have practical interpretations and are familiar to practitioners.”
As stated in Irony and Singpurwalla,1 José Bernardo said the following: “Non-subjective Bayesian analysis is just a part, an important part, I believe, of a healthy sensitivity analysis to the prior choice: it provides an answer to a very important question in scientific communication, namely, what could one conclude from the data if prior beliefs were such that the posterior distribution of the quantity of interest were dominated by the data.”
It would be interesting to see how a divergence prior would compare with the priors discussed in this article. Ghosh et al.2 developed a prior that maximizes the chi-square divergence between the prior and the posterior, whereas a reference prior is the prior distribution that maximizes the Kullback–Leibler divergence between the prior and the posterior distribution. For other divergence measures, first-order approximations already yield adequate priors, and the resulting prior is the Jeffreys prior; for the chi-square divergence, first-order approximations do not yield priors, so second-order approximations are used, and these again lead to the Jeffreys prior.
A distinction between weakly informative priors and noninformative priors is also given: a weakly informative prior influences the posterior mildly, whereas a noninformative prior is intended to have no influence on the posterior. The authors provide a very useful table of recommended prior distributions for log-location-scale distribution parameters, clearly stating and discussing which type of prior (informative, weakly informative, or noninformative) and which prior distribution inputs are needed. A simulation study is done to investigate the coverage probability. When using complete data and Type 2 censored data from a log-location-scale distribution, the independence Jeffreys prior has coverage rates that match the nominal confidence level, and when using Type 1 and random censoring the independence Jeffreys prior has coverage rates that are close to the nominal level.
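To make the notion of a coverage rate concrete, here is a minimal Monte Carlo sketch. It is my own toy example, using a known-variance normal-mean interval rather than the paper's log-location-scale setting:

```python
import random
import statistics

def coverage_rate(mu=0.0, sigma=1.0, n=30, z=1.96, reps=2000, seed=7):
    """Fraction of simulated samples whose z-interval for the mean
    covers the true mu; should land near the nominal 95% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.fmean(sample)
        half = z * sigma / n ** 0.5      # known-sigma half-width
        hits += (m - half) <= mu <= (m + half)
    return hits / reps

rate = coverage_rate()
# rate fluctuates around the nominal 0.95
```

A coverage study for a prior proceeds the same way, except the interval is a posterior credible interval and the question is how close its frequentist coverage comes to the nominal level.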
"Discussion of “Specifying prior distributions in reliability applications”", by Lizanne Raubenheimer. Applied Stochastic Models in Business and Industry, published 2023-07-30. DOI: 10.1002/asmb.2806.
Luca Di Persio, D. Mancinelli, Immacolata Oliva, K. Wallbaum
We introduce a new exotic option to be used within structured products to address a key disadvantage of standard time-invariant portfolio protection: the well-known cash-lock risk. Our approach enriches the framework by including a threshold in the allocation mechanism so that a guaranteed minimum equity exposure (GMEE) is ensured at any point in time. To offer such a solution while preserving hard capital protection, we apply an option-based structure with a dynamic allocation logic as underlying. We provide an in-depth analysis of the prices of these new exotic options, assuming a Heston–Vasicek-type financial market model, and compare our results with other options used within structured products. Our approach represents an interesting alternative for investors seeking downside protection via time-invariant portfolio protection strategies while also fearing a cash-lock event triggered by market turmoil.
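The GMEE mechanism can be caricatured as a floor on the classical TIPP equity weight. The sketch below is purely illustrative: the parameter names are mine, and the paper implements the guarantee through an option-based structure rather than a bare allocation rule.

```python
def tipp_equity_weight(portfolio, floor_pct, multiplier, gmee, peak):
    """Time-invariant portfolio protection with a guaranteed minimum
    equity exposure: the cushion-based weight never drops below gmee."""
    floor = floor_pct * peak                 # floor ratchets with the running peak
    cushion = max(portfolio - floor, 0.0)
    weight = multiplier * cushion / portfolio
    # plain TIPP would cash-lock at weight 0; GMEE keeps equity exposure alive
    return min(max(weight, gmee), 1.0)

# After a drawdown the cushion vanishes: plain TIPP allocates 0 to equity,
# while the GMEE variant keeps a 20% equity allocation.
w_locked = tipp_equity_weight(portfolio=100.0, floor_pct=0.9,
                              multiplier=4, gmee=0.20, peak=111.2)
w_normal = tipp_equity_weight(portfolio=100.0, floor_pct=0.8,
                              multiplier=4, gmee=0.20, peak=100.0)
```

In the first call the floor exceeds the portfolio value, so the cushion is zero and only the GMEE threshold keeps the strategy invested; in the second the usual cushion rule applies.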
"Time-invariant portfolio strategies in structured products with guaranteed minimum equity exposure". Applied Stochastic Models in Business and Industry, published 2023-07-30. DOI: 10.1002/asmb.2805.
"Comments on “Specifying prior distributions in reliability analysis” by Qinglong Tian, Colin Lewis-Beck, Jarad B. Niemi, and William W. Meeker", by Debasis Kundu. Applied Stochastic Models in Business and Industry, published 2023-07-27. DOI: 10.1002/asmb.2804.
The emergence of automated driving systems (ADS) has been a remarkable technological leap in recent times, holding tremendous potential to revolutionize mobility, minimize energy usage, and enhance safety on our roads. The present paper serves as a valuable contribution, addressing crucial aspects that underscore the significance of statistics in ADS applications and the accompanying challenges. I believe this important work will ignite further statistical research in the field and, consequently, foster the advancement of ADS development.
The request to intervene (RtI) is an indispensable component of Level 3 and Level 4 ADSs when they face situations that surpass their design capabilities. In this context, Figure 2 and Algorithm 1 offer a concise and lucid depiction of the statistical framework and crucial factors involved in an RtI. The Bayesian method proves valuable not only in ADS development but also as a complementary “white-box” algorithm alongside black-box algorithms, providing interpretable actions. One major complication is whether the driver's reaction time is sufficient to respond to the RtI. The paper proposes a model to predict the latent driver state based on driver-state and environment monitoring, which is crucial in the decision to issue an RtI. Due to the heterogeneity among human drivers, reaction time can vary substantially under identical driving scenarios; for example, senior drivers might require more time to react.1 How to incorporate individual variation under critical situations is an interesting problem.
During emergency scenarios, coming to a complete stop may not always be the optimal course of action. Instead, an ADS should consider safety options called “Minimum Risk Maneuvers” (MRMs).2 Identifying and comparing MRMs is a challenging statistical problem.
Section 4 offers an insightful exploration of the statistical aspect of ethical decision-making in ADSs, which has remained a significant concern since the inception of ADS development. Ethical challenges in ADSs go beyond the mere existence of ethical issues, as exemplified by Asimov's three laws of robotics, which prioritize the avoidance of harm to humans. An intriguing scenario from Awad et al.3 poses a dilemma: if an ADS cannot find a trajectory that would save everyone involved, should it prioritize hitting a teenage pedestrian over three elderly passengers? This situation forces the ADS to decide which human lives to potentially harm, highlighting the complexity of ethical decision-making in ADS.
The utilization of game theory and adversarial risk analysis (ARA) methodology to model ADS behavior in mixed traffic with human drivers is a highlight of this paper. The perception of other road users' intentions and the environment can be achieved through either an ego-centric method, which relies on ADS onboard
"Discussion of “some statistical challenges in automated driving systems”", by Feng Guo. Applied Stochastic Models in Business and Industry, published 2023-07-24. DOI: 10.1002/asmb.2802.
Various random effects models have been developed for clustered binary data; however, traditional approaches generally rely heavily on the specification of a continuous random effect distribution, such as a Gaussian or beta distribution. In this article, we introduce a new model that incorporates nonparametric unobserved random effects on the unit interval (0,1) into logistic regression multiplicatively with fixed effects. This multiplicative model setup facilitates prediction of our nonparametric random effects and the corresponding model interpretations. A distinctive feature of our approach is that a closed-form expression has been derived for the predictor of the nonparametric random effects on the unit interval (0,1) in terms of known covariates and responses. A quasi-likelihood approach has been developed for the estimation of our model. Our results are robust against random effects distributions ranging from very discrete binary to continuous beta distributions. We illustrate our method by analyzing recent large stock crash data in China. The performance of our method is also evaluated through simulation studies.
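The exact parameterization is in the article, but one plausible reading of the multiplicative setup is a cluster-level random effect u in (0,1) scaling the fixed-effects logistic probability. The function and variable names below are my assumptions, not the authors':

```python
import math

def cluster_success_prob(x, beta, u):
    """One reading of the multiplicative model: a cluster-level
    random effect u in (0,1) scales the fixed-effects logistic
    probability, shrinking it toward 0 for low-u clusters."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    p_fixed = 1.0 / (1.0 + math.exp(-eta))   # ordinary logistic part
    return u * p_fixed                        # multiplicative heterogeneity

p = cluster_success_prob(x=[1.0, 0.5], beta=[0.4, -0.2], u=0.6)
# 0 < p < u for any covariates, since u caps the success probability
```

Under this reading, unobserved heterogeneity acts as a ceiling on the success probability, which matches the idea of cluster-specific crash propensities.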
"Modeling clustered binary data with nonparametric unobserved heterogeneity: An application to stock crash analysis", by Ruixi Zhao, Renjun Ma, Guohua Yan, Haomiao Niu, and Wenjiang Jiang. Applied Stochastic Models in Business and Industry, published 2023-07-21. DOI: 10.1002/asmb.2801.
Inez Maria Zwetsloot, Tahir Mahmood, Funmilola Mary Taiwo, Zezhong Wang
Early detection of changes in the frequency of events is an important task in many fields, such as disease surveillance, monitoring of high-quality processes, reliability monitoring, and public health. This article focuses on detecting changes in multivariate event data by monitoring the time-between-events (TBE). Existing multivariate TBE charts are limited because they only signal after an event has occurred in each of the individual processes. This results in delays (i.e., long times to signal), especially when we are interested in detecting a change in one or a few processes with different rates. We propose a bivariate TBE chart that can signal in real time. We derive analytical expressions for the control limits and the average time-to-signal performance, conduct a performance evaluation, and compare our chart with an existing method. Our findings show that our method is an effective approach for monitoring bivariate TBE data: it has better detection ability than the existing method under transient shifts and is more generally applicable. A significant benefit of our method is that it signals in real time and that its control limits are based on analytical expressions. The proposed method is implemented on two real-life datasets from reliability and health surveillance.
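As a one-dimensional illustration of a TBE limit (not the authors' bivariate chart, whose limits come from their own analytical expressions), a lower control limit for exponential gaps can be read off the exponential quantile function:

```python
import math

def tbe_lower_limit(rate, alpha=0.0027):
    """Lower control limit for exponential time-between-events:
    an in-control gap falls below this limit with probability alpha,
    so unusually short gaps signal an increase in the event rate."""
    # alpha-quantile of Exp(rate), from F(t) = 1 - exp(-rate * t)
    return -math.log(1.0 - alpha) / rate

lcl = tbe_lower_limit(rate=2.0)   # in control: 2 events per time unit
# any observed gap shorter than lcl triggers an upward-shift signal
```

The point of monitoring gaps rather than counts is the same as in the article: a signal can be raised the moment a too-short gap is observed, in real time, instead of waiting for a counting period to close.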
"A real-time monitoring approach for bivariate event data". Applied Stochastic Models in Business and Industry, published 2023-07-20. DOI: 10.1002/asmb.2800.
In this paper, we present a new bivariate model for the joint description of Bitcoin prices and the media attention to Bitcoin. Our model is based on the class of Lévy processes and is able to realistically reproduce the jump-type dynamics of the considered time series. We focus on the low-frequency setup, which for Lévy-based models is substantially more difficult than the high-frequency case. We design a semiparametric estimation procedure for statistical inference on the parameters and the Lévy measures of the considered processes. We show that the dynamics of the market attention can be effectively modelled by Lévy processes with finite Lévy measures, and we propose a data-driven procedure for the description of the Bitcoin prices.
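To see why a finite Lévy measure is convenient: such a process is (up to drift) a compound Poisson process with finitely many jumps on any bounded interval, which is easy to simulate on a low-frequency grid. The parameters below are illustrative assumptions, not values from the paper.

```python
import random

def compound_poisson_path(rate, jump_sampler, n_steps, dt, seed=0):
    """Low-frequency observations of a compound Poisson process:
    jump times come from a Poisson clock with the given rate, jump
    sizes from the (normalized) Levy measure via jump_sampler."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    next_jump = rng.expovariate(rate)         # first jump time
    for k in range(1, n_steps + 1):
        while next_jump <= k * dt:            # apply all jumps up to k*dt
            x += jump_sampler(rng)            # draw a jump size
            next_jump += rng.expovariate(rate)
        path.append(x)                        # value at grid point k*dt
    return path

# jump sizes ~ N(0, 0.1): a finite-activity surrogate for log-price jumps
path = compound_poisson_path(rate=5.0,
                             jump_sampler=lambda r: r.gauss(0.0, 0.1),
                             n_steps=100, dt=0.01)
```

The statistical difficulty in the low-frequency setup is visible here: each grid increment aggregates an unknown number of jumps, so individual jumps cannot be observed directly.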
"Modelling the Bitcoin prices and media attention to Bitcoin via the jump-type processes", by Ekaterina Morozova and Vladimir Panov. Applied Stochastic Models in Business and Industry, published 2023-07-13. DOI: 10.1002/asmb.2798.
The paper Specifying Prior Distributions in Reliability Applications mainly provides an overview of methods for selecting non-informative prior distributions for parameters of basic lifetime distributions, as often used in reliability analyses. This discussion raises some related issues and comments on opportunities beyond basic Bayesian statistical methods that may be useful in reliability scenarios. The main emphasis in this discussion is on practical reliability analyses with few data available, where there is often a need for informative priors rather than non-informative priors, in order to take expert judgement into account. Furthermore, while rather abstract considerations of the non-informativeness of prior distributions are of theoretical interest, in most practical scenarios one aims at decision support, and the influence of the assumed priors on the final decisions should be considered, ideally establishing robustness of the final decision with respect to all priors deemed reasonable.
"Discussion of specifying prior distributions in reliability applications", by Frank P.A. Coolen. Applied Stochastic Models in Business and Industry, published 2023-07-13. DOI: 10.1002/asmb.2799.
This is a thorough review of approaches to prior elicitation in reliability and includes some extensive illustrations of the approaches. For me, this article is both a very useful reference document and a good primer for new students in the reliability field who would like to better understand how prior elicitation can be undertaken in reliability applications.
The focus is largely on non-informative priors and the various ways in which the idea of a lack of background information about a parameter can be realised. Since statistical reliability largely uses probability models with few parameters (typically two or three) that are common across many fields of application, it is not surprising that these are the approaches we see generally in the Bayesian literature when trying to specify a lack of background information.
The various problems with non-informative priors are well known. For the case of a ‘random sample’ of data to be analysed, the non-informative prior methods of this paper will tend to work well, particularly in the small-data case that is emphasised. However, it should be noted that they can start to work in misleading ways in the more complex data situations that one can see in reliability settings. For example, in hierarchical models, non-informative priors on scale parameters can lead to inferences that describe the data as entirely noise.1 Model comparison, for example using Bayes factors, can also be problematic.2 In these cases, as the authors point out, priors that avoid assigning belief to implausible values become important.
No doubt a separate paper can be written on prior specification under these more complex models, and the pitfalls therein. I thank the authors for bringing together a comprehensive study of prior elicitation in reliability applications.
{"title":"Discussion of specifying prior distributions in reliability applications","authors":"Simon Wilson","doi":"10.1002/asmb.2795","DOIUrl":"10.1002/asmb.2795","url":null,"abstract":"<p>This is a thorough review of approaches to prior elicitation in reliability and includes some extensive illustrations of the approaches. For me, this article is both a very useful reference document and can act as a good primer for new students in the reliability field who would like to understand better how prior elicitation can be undertaken in reliability applications.</p><p>The focus is largely on uninformative priors and the various approaches in which the idea of lack of background information about a parameter can be realised. Since statistical reliability largely uses probability models with few (2 or 3 is typical) parameters that are common across many fields of application, it is not surprising that these are the approaches that we see generally in the Bayesian literature when trying to specify a lack of background information.</p><p>The various problems with non-informative priors are well known. For the case of a ‘random sample’ of data to be analysed, the noninformative prior methods of this paper will tend to work well and more specifically in the small data case that is emphasised. However, it should be noted that they can start to work in misleading ways in more complex data situations which one can see in reliability settings. For example, in hierarchical models, non-informative parameters on scale parameters can lead to inferences that describe the data as entirely noise.<span><sup>1</sup></span> Model comparison, for example using Bayes factors, can also be problematic.<span><sup>2</sup></span> In these cases, as the authors point out, priors that avoid assigning belief to implausible values become important.</p><p>No doubt a separate paper can be written on prior specification under these more complex models, and the pitfalls therein. 
I thank the authors for bringing together a comprehensive study of prior elicitation in reliability applications.</p>","PeriodicalId":55495,"journal":{"name":"Applied Stochastic Models in Business and Industry","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asmb.2795","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"51687477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}