Pub Date: 2024-05-04 | DOI: 10.1007/s10614-024-10592-7
Hongyu An, Boping Tian
Understanding why extreme events occur is crucial in many fields, particularly in managing financial market risk. In order to explain such occurrences, it is necessary to use explanatory variables. However, flexible models with explanatory variables are severely lacking in financial market risk management, particularly when the variables are sampled at different frequencies. To address this gap, this article proposes a novel dynamic tail index regression model based on mixed-frequency data, which enables the high-frequency variable of interest to depend on both high- and low-frequency variables within the framework of extreme value regression. Specifically, it concurrently leverages information from low-frequency macroeconomic variables and high-frequency market variables to model the tail distribution of high-frequency returns, consequently enabling the computation of high-frequency Value at Risk and Expected Shortfall. Monte Carlo simulations and empirical studies show that the proposed method effectively models stock market tail risk and produces satisfactory forecasts. Moreover, including macroeconomic variables in the model provides insights for macroprudential regulation.
Title: Unleashing the Potential of Mixed Frequency Data: Measuring Risk with Dynamic Tail Index Regression Model
Journal: Computational Economics
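The extreme-value machinery behind the VaR and Expected Shortfall computation can be illustrated with a static sketch. This is not the authors' dynamic mixed-frequency regression; it is the standard Hill estimator plus the Weissman quantile extrapolation, applied to synthetic Pareto losses, just to show how a tail index turns into VaR and ES:

```python
import math, random

def hill_tail_index(losses, k):
    """Hill estimator of the tail index from the k largest losses."""
    x = sorted(losses, reverse=True)      # descending order statistics
    threshold = x[k]                      # the (k+1)-th largest loss
    mean_log_excess = sum(math.log(x[i] / threshold) for i in range(k)) / k
    return 1.0 / mean_log_excess

def evt_var_es(losses, k, p):
    """Value at Risk and Expected Shortfall at tail probability p,
    via Weissman extrapolation under a Pareto-type tail."""
    n = len(losses)
    alpha = hill_tail_index(losses, k)
    threshold = sorted(losses, reverse=True)[k]
    var = threshold * (k / (n * p)) ** (1.0 / alpha)
    es = var * alpha / (alpha - 1.0)      # finite only when alpha > 1
    return var, es

# Synthetic Pareto(3) losses via inverse-transform sampling (illustrative only)
random.seed(0)
losses = [(1.0 - random.random()) ** (-1.0 / 3.0) for _ in range(5000)]
var, es = evt_var_es(losses, k=250, p=0.01)
```

With a tail index near 3, the 1% VaR should land near the true Pareto quantile of about 4.6, and ES sits above VaR by the factor alpha/(alpha-1).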
Pub Date: 2024-05-04 | DOI: 10.1007/s10614-024-10618-0
Alexandre Momparler, Pedro Carmona, Francisco Climent
In today’s dynamic financial landscape, the integration of environmental, social, and governance (ESG) principles into investment strategies has gained great significance. Investors and financial advisors increasingly confront the crucial question of whether their dedication to ESG values enhances or hampers their pursuit of financial performance. Addressing this issue, our research examines the impact of ESG ratings on financial performance using a machine learning approach powered by the Extreme Gradient Boosting (XGBoost) algorithm. Our study centers on US-registered equity funds with a global investment scope and performs a cross-sectional analysis of annualized fund returns over a five-year period (2017–2021). To strengthen the analysis, we merge data from three prominent mutual fund databases, improving data completeness, accuracy, and consistency. Our findings substantiate a positive correlation between ESG ratings and fund performance. In fact, the ESG score ranks among the top five variables with the highest predictive capacity for mutual fund performance. As sustainable investing continues to ascend as a central force within financial markets, our study underscores the pivotal role that ESG factors play in shaping investment outcomes. Our research provides socially responsible investors and financial advisors with valuable insights, empowering them to make informed decisions that align their financial objectives with their commitment to ESG values.
Title: Catalyzing Sustainable Investment: Revealing ESG Power in Predicting Fund Performance with Machine Learning
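Gradient boosting's core loop, repeatedly fitting weak learners to residuals, can be sketched without the XGBoost library itself. The fund features and coefficients below are invented for illustration; counting how often each feature is chosen by a stump is a crude stand-in for the predictive-capacity ranking discussed in the abstract:

```python
import random

def fit_stump(X, resid):
    """Best single-feature threshold split minimizing squared error on residuals."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, resid) if row[j] <= thr]
            right = [r for row, r in zip(X, resid) if row[j] > thr]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
            if best is None or sse < best[0]:
                best = (sse, j, thr, lm, rm)
    return best[1:]

def boost(X, y, rounds=30, lr=0.1):
    """Gradient boosting for squared loss with decision stumps."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, thr, lm, rm = fit_stump(X, resid)
        stumps.append((j, thr, lm, rm))
        pred = [p + lr * (lm if row[j] <= thr else rm) for p, row in zip(pred, X)]
    return stumps, pred

# Toy funds: columns are [ESG score, fund size, expense ratio] (invented data)
random.seed(1)
X = [[random.random(), random.random(), random.random()] for _ in range(120)]
y = [0.8 * esg - 0.5 * fee + 0.05 * random.gauss(0, 1) for esg, _, fee in X]
stumps, pred = boost(X, y)
picked = [j for j, _, _, _ in stumps]   # feature chosen per round (crude importance)
```

Because the synthetic ESG score carries the strongest signal, the stumps select feature 0 frequently, mirroring the paper's finding that ESG ranks among the most predictive variables.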
Pub Date: 2024-05-03 | DOI: 10.1007/s10614-024-10610-8
Havisha Jahajeeah, Aslam A. E. F. Saib
The Greymodels package presents an interactive interface in R for the statistical modelling and forecasting of incomplete or small datasets using grey models. The package, based on the Shiny framework, has been designed to work with univariate and multivariate datasets having different properties and characteristics. The functionality of the package is demonstrated with a few examples; in particular, the user-friendly interface allows users to easily compare the predictive performance of different models and, among other features, visualize plots of predicted values within a user-chosen confidence interval. The built-in algorithms in the Greymodels package are extensions or hybrids of the GM(1,1) model; this article gives an overview of the theoretical background of the basic grey model and also proposes a PSO-GM(1,1) algorithm included in the package.
Title: Greymodels: A Shiny Package for Grey Forecasting Models in R
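The basic GM(1,1) recurrence the package builds on is compact enough to sketch directly (shown here in Python rather than R for brevity): accumulate the series (1-AGO), fit the development coefficient a and grey input b by least squares on the whitened equation x0(k) = -a*z1(k) + b, then forecast from the exponential solution. The data values are invented:

```python
import math

def gm11(x0, horizon=3):
    """Grey GM(1,1): fit on series x0, forecast `horizon` steps ahead."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series (1-AGO)
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    c = (m * szy - sz * sy) / (m * szz - sz * sz)         # OLS slope of y on z
    b = (sy - c * sz) / m
    a = -c                                                # development coefficient
    def x1_hat(k):  # fitted accumulated value at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]
    return a, b, fitted

# Example on a short, near-exponential series (invented data)
a, b, fitted = gm11([100, 110, 122, 135, 149], horizon=3)
```

On near-exponential data like this, the in-sample fit is within a fraction of a percent, which is exactly the regime grey models are designed for: small samples with roughly geometric growth.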
Pub Date: 2024-05-03 | DOI: 10.1007/s10614-024-10596-3
Keyvan Eslami, Thomas Phelan
A recent literature within quantitative macroeconomics has advocated the use of continuous-time methods for dynamic programming problems. In this paper we explore the relative merits of continuous-time and discrete-time methods within the context of stationary and nonstationary income fluctuation problems. For stationary problems in two dimensions, the continuous-time approach is both more stable and typically faster than the discrete-time approach for any given level of accuracy. In contrast, for concave lifecycle problems (in which age or time enters explicitly), simply iterating backwards from the terminal date in discrete time is superior to any continuous-time algorithm. However, we also show that the continuous-time framework can easily incorporate nonconvexities and multiple controls—complications that often require either problem-specific ingenuity or nonlinear root-finding in the discrete-time context. In general, neither approach unequivocally dominates the other, making the choice of one over the other an art, rather than an exact science.
Title: The Art of Temporal Approximation: An Investigation into Numerical Solutions to Discrete- and Continuous-Time Problems in Economics
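The discrete-time lifecycle recipe the authors find superior — iterating backwards from the terminal date — takes only a few lines on a wealth grid. A minimal cake-eating sketch with log utility; the horizon, discount factor, and grid size are invented placeholders, not the paper's calibration:

```python
import math

def solve_lifecycle(T=5, beta=0.95, grid_n=60, w_max=10.0):
    """Finite-horizon cake-eating solved by backward induction on a wealth grid."""
    grid = [w_max * i / (grid_n - 1) for i in range(grid_n)]
    u = lambda c: math.log(c) if c > 1e-12 else -1e12   # log utility, -inf at c=0
    V = [u(w) for w in grid]            # terminal period: consume all wealth
    policy = []
    for t in range(T - 1, 0, -1):       # iterate back from period T-1 to 1
        V_new, pol = [], []
        for i, w in enumerate(grid):
            # choose tomorrow's wealth grid[j] <= w; consume the difference
            best, best_j = -1e18, 0
            for j in range(i + 1):
                val = u(w - grid[j]) + beta * V[j]
                if val > best:
                    best, best_j = val, j
            V_new.append(best)
            pol.append(best_j)
        V, policy = V_new, pol
    return grid, V, policy
```

Each backward step is a simple maximization over a finite grid — no root-finding — which is why the paper finds this approach hard to beat for concave lifecycle problems.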
Pub Date: 2024-05-01 | DOI: 10.1007/s10614-024-10604-6
Xavier Martínez-Barbero, Roberto Cervelló-Royo, Javier Ribal
In recent years, artificial intelligence has helped to improve processes and performance in many different areas. In portfolio optimization, the inputs play a crucial role, and machine learning algorithms can improve their estimation to create robust portfolios able to generate returns consistently. This paper combines classical mean–variance optimization and machine learning techniques, specifically long short-term memory (LSTM) neural networks, to provide more accurate predicted returns and generate profitable portfolios for 10 holding periods that present different financial contexts. The proposed algorithm is trained and tested with historical EURO STOXX 50® Index data from January 2015 to December 2020, and from January 2021 to June 2022, respectively. Empirical results show that our LSTM neural networks achieve small prediction errors (the average MSE across the 10 holding periods is 0.00047 and the average MAE is 0.01634) and predict the direction of returns with an average accuracy of 95.8% over the 10 investment periods.
Title: Portfolio Optimization with Prediction-Based Return Using Long Short-Term Memory Neural Networks: Testing on Upward and Downward European Markets
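Once a network supplies predicted returns, the mean–variance step is standard. A two-asset closed-form sketch; the "predicted" returns and covariance below are invented placeholders, not the paper's estimates:

```python
def tangency_weights_2assets(mu, cov):
    """Mean-variance weights proportional to inv(Sigma) @ mu, for two assets."""
    (a, b), (_, d) = cov
    det = a * d - b * b
    raw = [(d * mu[0] - b * mu[1]) / det,   # first row of inv(Sigma) @ mu
           (a * mu[1] - b * mu[0]) / det]   # second row
    s = sum(raw)
    return [w / s for w in raw]             # normalize to a fully invested portfolio

# Hypothetical LSTM-predicted monthly returns and a sample covariance matrix
mu = [0.012, 0.007]
cov = [[0.0025, 0.0006],
       [0.0006, 0.0016]]
w = tangency_weights_2assets(mu, cov)
```

The asset with the higher predicted return receives the larger weight, which is how better return forecasts translate directly into different portfolio tilts.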
Pub Date: 2024-04-30 | DOI: 10.1007/s10614-024-10598-1
Jongwoo Choi, Seongil Jo, Jaeoh Kim
This paper proposes a Bayesian varying coefficient model to estimate parameters exhibiting time-dependence in the Cobb–Douglas (CD) production function. We expand upon the classical CD production function by incorporating time-varying properties to enable more sophisticated modeling. We utilize a flexible and efficient Bayesian approach-based computational algorithm for statistical inference in the constrained parameter space, where the sum of model elasticities must be less than 1. The proposed model is applied to four real datasets from macroeconomics, as well as various social science issues broadly covered by the CD production function. The real data applications demonstrate the effectiveness of the proposed model in estimating underlying time-varying effects for parameters in the CD production function.
Title: A Bayesian Time-Varying Coefficient Model for Cobb–Douglas Production Function
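The classical static benchmark the paper extends is the log-linear Cobb–Douglas fit. A minimal sketch, assuming constant returns to scale so that a single-regressor OLS suffices; this is the textbook estimator on invented data, not the paper's Bayesian time-varying algorithm:

```python
import math, random

def fit_cd_crs(Y, K, L):
    """Cobb-Douglas under constant returns: log(Y/L) = log A + alpha * log(K/L)."""
    x = [math.log(k / l) for k, l in zip(K, L)]
    y = [math.log(q / l) for q, l in zip(Y, L)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    alpha = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)          # OLS slope = capital share
    logA = my - alpha * mx                           # intercept = log technology
    return math.exp(logA), alpha

# Synthetic data generated with A = 2, alpha = 0.3 (labor share 0.7)
random.seed(7)
K = [random.uniform(50, 500) for _ in range(100)]
L = [random.uniform(20, 200) for _ in range(100)]
Y = [2.0 * k ** 0.3 * l ** 0.7 * math.exp(random.gauss(0, 0.02)) for k, l in zip(K, L)]
A_hat, alpha_hat = fit_cd_crs(Y, K, L)
```

The paper's contribution is to let alpha and A drift over time under the constraint that the elasticities sum to less than one; the static fit above is the special case with constant coefficients.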
Pub Date: 2024-04-30 | DOI: 10.1007/s10614-024-10602-8
Vittorio Carlei, Piera Cascioli, Alessandro Ceccarelli, Donatella Furia
This research explores the use of machine learning to predict alpha in constructing portfolios, leveraging a broad array of environmental, social, and governance (ESG) factors within the S&P 500 index. Whereas the existing literature bases its analyses on synthetic indicators, this work proposes an analytical deep dive based on a dataset containing the sub-indicators that give rise to those synthetic indices. Since such dimensionality of variables requires specific processing, we deemed it necessary to use a machine learning algorithm, allowing us to study, with strong specificity, two types of relationships: the interaction between individual ESG variables and their effect on corporate performance. The results clearly show that ESG factors have a significant relationship with company performance. These findings emphasise the importance of integrating ESG indicators into quantitative investment strategies using machine learning methodologies.
Title: Can Machine Learning Explain Alpha Generated by ESG Factors?
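One generic way to ask "which ESG sub-indicator drives performance" is permutation importance: shuffle one feature and measure how much the fit degrades. A self-contained sketch with an invented ESG sub-score and a pure-noise feature, using plain OLS fitted by gradient descent — a stand-in model, not the paper's algorithm:

```python
import random

def fit_linear_gd(X, y, lr=0.1, epochs=2000):
    """Plain OLS via batch gradient descent (features assumed roughly in [0, 1])."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * p, 0.0
        for row, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, row)) + b - yi
            for j in range(p):
                gw[j] += err * row[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def mse(X, y, w, b):
    return sum((sum(wj * xj for wj, xj in zip(w, row)) + b - yi) ** 2
               for row, yi in zip(X, y)) / len(y)

def permutation_importance(X, y, w, b, j, seed=0):
    """MSE increase when feature j is shuffled: larger means more important."""
    rng = random.Random(seed)
    col = [row[j] for row in X]
    rng.shuffle(col)
    Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
    return mse(Xp, y, w, b) - mse(X, y, w, b)

random.seed(3)
X = [[random.random(), random.random()] for _ in range(300)]  # [ESG sub-score, noise]
y = [0.9 * e + 0.05 * random.gauss(0, 1) for e, _ in X]
w, b = fit_linear_gd(X, y)
imp = [permutation_importance(X, y, w, b, j) for j in range(2)]
```

Shuffling the informative sub-score destroys the fit while shuffling the noise feature barely moves it — the model-agnostic signal that the sub-indicator, not the composite, carries the alpha.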
Pub Date: 2024-04-29 | DOI: 10.1007/s10614-024-10606-4
Teddy Lazebnik
Accurately estimating the size of unregistered economies is crucial for informed policymaking and economic analysis. However, many studies appear to overfit partial data because they use simple linear regression models. Recent studies have adopted a more advanced approach, using non-linear models obtained with machine learning techniques. In this study, we take a step forward in data-driven modeling of unregistered economic activity (UEA) size prediction using a novel deep-learning approach. The proposed two-phase deep learning model combines an AutoEncoder for feature representation and a Long Short-Term Memory (LSTM) network for time-series prediction. We show it outperforms traditional linear regression models and current state-of-the-art machine learning-based models, offering a more accurate and reliable estimation. Moreover, we show that the proposed model generalizes UEA dynamics across countries and timeframes better, providing policymakers with a sounder basis for designing socio-economic policies to tackle UEA.
Title: Going a Step Deeper Down the Rabbit Hole: Deep Learning Model to Measure the Size of the Unregistered Economy Activity
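The LSTM component of such a pipeline can be demystified with a single-unit forward pass. The weights below are arbitrary placeholders, not trained parameters; the real model is vector-valued and learned, but the gate arithmetic is exactly this:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM cell step for scalar input and state; W is a dict of weights."""
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])    # forget gate
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])    # input gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate cell state
    c_new = f * c + i * g                               # cell state update
    h_new = o * math.tanh(c_new)                        # hidden state (the output)
    return h_new, c_new

# Arbitrary placeholder weights, all 0.5, purely for illustration
W = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h = c = 0.0
for x in [0.1, -0.2, 0.3]:   # a toy input sequence
    h, c = lstm_step(x, h, c, W)
```

The cell state c is the long-term memory the gates protect; it is what lets the sequence model carry information across timeframes, which is the property the paper exploits for cross-country generalization.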
Pub Date: 2024-04-29 | DOI: 10.1007/s10614-024-10601-9
Yulia Kareeva, Artem Sedakov, Mengke Zhen
The paper examines an opinion dynamics game in a social group with two active agents (influencers) based on the Friedkin–Johnsen model. In the game, we assume sequential announcements of influence efforts by the active agents on the opinions of other (passive) agents of the group. We characterize the Stackelberg solutions as proper solution concepts under sequential play. We then analyze the solutions with a number of measures that quantify them in different aspects: (i) the role of the information structure, i.e., open-loop vs. feedback, (ii) the advantage of sequential over simultaneous moves, and (iii) whether being a leader in the game is more cost-effective than being a follower. Finally, we perform numerical simulations for Zachary’s karate club network to understand how the Stackelberg solutions are sensitive to a change in a parameter characterizing the stubbornness of agents to their initial opinions. The results indicate that the information structure has minimal effect; however, the greatest advantage of the open-loop policy could be achieved with a fully conforming society. In such a society, the efforts of influencers become more efficient, reducing the spread of opinions. Additionally, we observe that the follower has an advantage in the game, which forces each influencer to delay their action until the other one acts.
Title: Stackelberg Solutions in an Opinion Dynamics Game with Stubborn Agents
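The Friedkin–Johnsen update underlying the game is a one-line recursion: x(t+1) = L W x(t) + (I - L) x(0), where the diagonal matrix L holds each agent's susceptibility (zero for a fully stubborn agent). A toy sketch on an invented four-agent line network — not Zachary's karate club and without the Stackelberg effort choices:

```python
def fj_dynamics(W, x0, lam, steps=200):
    """Friedkin-Johnsen: x(t+1) = lam_i * (W x(t))_i + (1 - lam_i) * x0_i."""
    x = list(x0)
    n = len(x0)
    for _ in range(steps):
        x = [lam[i] * sum(W[i][j] * x[j] for j in range(n))
             + (1.0 - lam[i]) * x0[i] for i in range(n)]
    return x

# Row-stochastic influence matrix for a 4-agent line network (invented values)
W = [[0.50, 0.50, 0.00, 0.00],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.00, 0.00, 0.50, 0.50]]
x0 = [1.0, 0.0, 0.0, -1.0]    # agents 0 and 3 play the role of opposed influencers
lam = [0.0, 0.8, 0.8, 0.0]    # influencers fully stubborn (susceptibility zero)
x = fj_dynamics(W, x0, lam)
```

The stubborn endpoints keep their initial opinions exactly, while each passive agent settles between them, pulled toward its nearer influencer — the spread of limiting opinions is what the influencers' efforts reshape in the game.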
Pub Date: 2024-04-26 | DOI: 10.1007/s10614-024-10603-7
Mohammad Aqil Sahil, Meenakshi Kaushal, Q. M. Danish Lohani
Fuzzy Data Envelopment Analysis is a modeling technique that efficiently ranks decision-making units (DMUs) based on imprecise inputs and outputs. The method constructs an efficient frontier line that separates efficient and inefficient DMUs. The goal is to improve the efficiency score of each inefficient DMU by moving them to the efficient frontier. In this study, we introduce a new approach, called the Pythagorean approach, which considers both the input and the output aspects. The approach is applied to the CCR model, and a new version of the BCC model is introduced, known as the Pythagorean approach-based BCC model. To handle the vagueness of the data set, the Pythagorean approach-based BCC model is extended to a fuzzy environment using a new type of fuzzy number called a sine-shaped fuzzy number. Finally, the efficacy of the model is tested in Indian public sector banks.
Title: A Novel Pythagorean Approach Based Sine-Shaped Fuzzy Data Envelopment Analysis Model: An Assessment of Indian Public Sector Banks
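The crisp CCR efficiency score at the heart of any DEA variant can be illustrated without an LP solver: for each DMU, search output-weight mixes for the one that maximizes its output/input ratio after rescaling so no DMU's ratio exceeds one. This brute-force grid search is a crude stand-in for the linear program real DEA software solves, and the bank data are invented, not the paper's sample:

```python
def ccr_efficiency(dmu, inputs, outputs, grid=51):
    """CCR ratio efficiency of one DMU by brute-force search over output weights.

    Single input, two outputs; the input weight is fixed at 1, and dividing by
    the maximum ratio enforces the constraint that every DMU scores <= 1."""
    best = 0.0
    for t in range(1, grid):
        u = [t / grid, 1.0 - t / grid]   # candidate output weights
        def ratio(k):
            return sum(ui * o for ui, o in zip(u, outputs[k])) / inputs[k]
        scale = max(ratio(k) for k in range(len(inputs)))
        best = max(best, ratio(dmu) / scale)
    return best

# Toy bank data: one input (staff), two outputs (loans, deposits) - invented
inputs = [10.0, 12.0, 8.0, 15.0]
outputs = [[60.0, 40.0], [70.0, 30.0], [40.0, 50.0], [50.0, 60.0]]
scores = [ccr_efficiency(k, inputs, outputs) for k in range(4)]
```

Banks 0 and 2 each dominate some weight mix and score 1 (they span the efficient frontier), while banks 1 and 3 are dominated and score below 1 — the frontier/non-frontier split that the paper's fuzzy extension carries over to imprecise data.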