On the efficacy of “herd behavior” in the commodities market: A neuro-fuzzy agent “herding” on deep learning traders
Alfonso Guarino, Luca Grilli, Domenico Santoro, Francesco Messina, Rocco Zaccagnino
Applied Stochastic Models in Business and Industry, 2023-07-04. DOI: 10.1002/asmb.2793. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/asmb.2793
This article analyzes the trading strategies of five state-of-the-art reinforcement learning agents on six commodity futures: Brent oil, corn, gold, coal, natural gas, and sugar. Some of these were chosen because of the periods considered, before and after the 2022 Russia–Ukraine conflict, when they became essential commodities. The agents' behavior was assessed using a series of financial indicators, and the trader with the best strategy was selected. The top traders' behavior was then used to train our recently introduced neuro-fuzzy agent, which adjusts its trading strategy through “herd behavior.” The results highlight how well the reinforcement learning agents performed and how our neuro-fuzzy trader could improve its strategy using information about competitors' moves. Finally, we performed experiments with and without transaction costs, observing that including these costs leads to fewer transactions; even so, the intelligent agents' performances remain outstanding and are surpassed by the neuro-fuzzy agent.
{"title":"Discussion of “Some statistical challenges in automated driving systems”","authors":"David Banks, Yen-Chun Liu","doi":"10.1002/asmb.2797","DOIUrl":"10.1002/asmb.2797","url":null,"abstract":"","PeriodicalId":55495,"journal":{"name":"Applied Stochastic Models in Business and Industry","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44891894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discussion of “Specifying prior distributions in reliability applications”: Towards new formal rules for informative prior elicitation?
Nicolas Bousquet
Applied Stochastic Models in Business and Industry, 2023-06-30. DOI: 10.1002/asmb.2794. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/asmb.2794
The article by Tian et al. (Appl. Stoch. Models Bus. Ind. 2023) takes an interesting look at the use of non-informative priors adapted to several censoring processes, which are common in reliability. It proposes a continuum of modelling approaches that goes as far as defining weakly informative priors to overcome the well-known shortcomings of frequentist approaches to problems involving highly censored samples. In this commentary, I make some critical remarks and propose to link this work to a more generic vision of what a relevant Bayesian elicitation in reliability could be, taking advantage of recent theoretical and applied advances. Through tools such as approximate posterior priors and prior equivalent sample sizes, illustrated with simple reliability models, I suggest methodological avenues for formalizing the elicitation of informative priors in an auditable, defensible way. By allowing a clear modulation of subjective information, this might respond to the authors' primary concern of constructing weakly informative priors and to a more general concern for precaution in Bayesian reliability.
{"title":"Discussion of “Specifying prior distributions in reliability applications,” by Qinglong Tian, Colin Lewis-Beck, Jarad B. Niemi, and William Meeker","authors":"Necip Doganaksoy, Steven E. Rigdon","doi":"10.1002/asmb.2796","DOIUrl":"10.1002/asmb.2796","url":null,"abstract":"","PeriodicalId":55495,"journal":{"name":"Applied Stochastic Models in Business and Industry","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45948331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A stochastic model for evaluating the peaks of commodities' returns
Roy Cerqueti, Raffaele Mattera, Alessandro Ramponi
Applied Stochastic Models in Business and Industry, 2023-06-26. DOI: 10.1002/asmb.2790. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/asmb.2790
This paper proposes a probabilistic model for evaluating the peak components of a commodity's return. The study is grounded in the evidence that spikes in the returns are due to shocks occurring in the external environment. We follow an approach based on a particular class of point processes, the Spatial Mixed Poisson Processes, by exploiting an invariance property of this class. The theoretical framework is used to present an estimation procedure for the returns based on the available information. An empirical instance based on different commodities' returns, with the abnormal returns of the volatility index as external shocks, is presented to motivate our theoretical approach.
A Bayesian record linkage model incorporating relational data
Juan Sosa, Abel Rodríguez
Applied Stochastic Models in Business and Industry, 2023-06-26. DOI: 10.1002/asmb.2792
In this article, we introduce a novel Bayesian approach for linking multiple social networks in order to discover accounts on different networks that belong to the same real-world person. In particular, we develop a latent model that allows us to jointly characterize the network and linkage structures, relying on both relational and profile data. In contrast to other existing approaches in the machine learning literature, our Bayesian implementation naturally provides uncertainty quantification, via posterior probabilities, for the linkage structure itself or any function of it. Our findings clearly suggest that our methodology can produce accurate point estimates of the linkage structure even in the absence of profile information; moreover, in an identity resolution setting, our results confirm that including relational data in the matching process improves linkage accuracy. We illustrate our methodology using real data from popular social networks such as Twitter, Facebook, and YouTube.
This interesting paper by Tian et al. presents a comprehensive investigation of non-informative and weakly informative priors for two-parameter (log-location and scale) failure distributions. The authors provide helpful and practical advice to the Bayesian analyst on the selection of appropriate priors, and specifically on avoiding unrealistic posterior estimates, particularly where data are sparse.
The motivating examples provide challenging settings where the information provided by the data is extremely slight. These settings are typical of systems engineered to be highly reliable, where failure data are minimal by design but where inferences about failure risk are critical. These are also precisely the settings where default choices of noninformative priors may be unexpectedly influential,1 leading either to improper posteriors or to posteriors that place significant mass in implausible regions. The authors' fundamental principle (§5.4), that priors should always be constructed to avoid this consequence, is very well stated and will bear much repetition in other forums.
We have only one main point to make. It relates to their statement in the abstract that ‘for Bayesian inference, there is only one method of constructing equal-tailed credible intervals—but it is necessary to provide a prior distribution to fully specify the model.’ We agree, but in our view the statement is incomplete: the model itself must have been chosen to begin with. Although this is not the main point of the paper, the consequences of model choice can be considerable, particularly when all of the inferential action takes place in the tails of the distribution, where only a few percent of failures may ever be observed to occur.
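To make the point about the tails concrete, the sketch below is a hypothetical illustration (not the authors' analysis of the Bearing Cage data): it fits Weibull and lognormal models by maximum likelihood to the same right-censored toy sample and compares a small quantile, which is exactly where the two fits can disagree most.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Hypothetical right-censored lifetime sample (not the Bearing Cage data).
t = rng.weibull(2.0, 60) * 1000.0          # latent failure times
cens = 800.0                               # end of test
obs = np.minimum(t, cens)                  # observed times
failed = t <= cens                         # failure indicator

def neg_loglik(params, dist):
    """Right-censored negative log-likelihood for a two-parameter model."""
    shape, scale = np.exp(params)          # optimize on the log scale for positivity
    d = dist(shape, scale=scale)
    return -(np.sum(d.logpdf(obs[failed])) + np.sum(d.logsf(obs[~failed])))

fits = {}
for name, dist in [("Weibull", stats.weibull_min), ("lognormal", stats.lognorm)]:
    res = optimize.minimize(neg_loglik, x0=np.log([1.0, 500.0]),
                            args=(dist,), method="Nelder-Mead")
    shape, scale = np.exp(res.x)
    # The 1% quantile (time by which 1% of units fail) under each fitted model:
    fits[name] = dist(shape, scale=scale).ppf(0.01)

print(fits)   # the two models can give quite different small quantiles
```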
In this spirit we have reproduced in our Figure 1 the authors' Weibull probability plot (their Figure 1) of the Bearing Cage failure data.2 The estimated parameters of the original Weibull fit are