In this paper, we test the realization effect, i.e., that risk-taking increases after a paper loss, whereas risk-taking decreases after a realized loss, using gambling data from a real casino. During a particular casino visit, losses are likely perceived as paper losses because the chance to offset prior losses remains available until the customer leaves the casino. However, when casino customers leave the casino, the final account balance is realized. Using individual-level slot machine gambling records, we find that risk-taking after paper losses increases during a visit and that this effect is more pronounced for larger losses. Conversely, risk-taking across multiple visits is not altered if the realized losses are comparatively small, whereas risk-taking is reduced if realized losses are comparatively large.
{"title":"The Effect of Paper Versus Realized Losses on Subsequent Risktaking: Field Evidence From Casino Gambling","authors":"Philippe Meier, Raphael Flepp, M. Rüdisser, E. Franck","doi":"10.2139/ssrn.3515981","DOIUrl":"https://doi.org/10.2139/ssrn.3515981","url":null,"abstract":"In this paper, we test the realization effect, i.e., that risk-taking increases after a paper loss, whereas risk-taking decreases after a realized loss, using gambling data from a real casino. During a particular casino visit, losses are likely perceived as paper losses because the chance to offset prior losses remains effective until leaving the casino. However, when casino customers leave the casino, the final account balance is realized. Using individual-level slot machine gambling records, we find that risk-taking after paper losses increases during a visit and that this effect is more pronounced for larger losses. Conversely, risk-taking across multiple visits is not altered if the realized losses are comparatively small, whereas risk-taking is reduced if realized losses are comparatively large.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123253065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In view of the fact that minimum charge and premium budget constraints are natural economic considerations in any risk transfer between the insurance buyer and seller, this paper revisits the optimal insurance contract design problem in terms of Pareto optimality while imposing these practical constraints. Pareto-optimal insurance contracts, consisting of an indemnity schedule and a premium payment, are solved for in the cases where the risk preferences of the buyer and seller are given by Value-at-Risk or Tail Value-at-Risk. The effects of our constraints and of the relative bargaining powers of the buyer and seller on the Pareto-optimal insurance contracts are highlighted. Numerical experiments are employed to further examine these effects for some given risk preferences.
{"title":"Pareto-Optimal Insurance Contracts With Premium Budget and Minimum Charge Constraints","authors":"Alexandru V. Asimit, K. Cheung, W. F. Chong, Junlei Hu","doi":"10.2139/ssrn.3508838","DOIUrl":"https://doi.org/10.2139/ssrn.3508838","url":null,"abstract":"Abstract In view of the fact that minimum charge and premium budget constraints are natural economic considerations in any risk-transfer between the insurance buyer and seller, this paper revisits the optimal insurance contract design problem in terms of Pareto optimality with imposing these practical constraints. Pareto optimal insurance contracts, with indemnity schedule and premium payment, are solved in the cases when the risk preferences of the buyer and seller are given by Value-at-Risk or Tail Value-at-Risk. The effect of our constraints and the relative bargaining powers of the buyer and seller on the Pareto optimal insurance contracts are highlighted. Numerical experiments are employed to further examine these effects for some given risk preferences.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128444341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assuming risk aversion, the Second-degree Stochastic Dominance (SSD) criterion is employed for prospect ranking. However, in some cases the optimal SSD rule cannot distinguish between two alternative prospects even when it is obvious that one prospect is preferred by practically all risk averters, which is a paradoxical result. The optimal Almost SSD (ASSD) rule, which aims to avoid such paradoxical results, also fails to do so. We suggest a new rule, the General ASSD (GASSD) rule, to replace the ASSD rule; it resolves the existing and potential paradoxes of both the SSD and the ASSD rules. The GASSD rule, unlike the SSD and ASSD rules, takes into account the magnitude of the difference between the mean returns of the two prospects under consideration. The ASSD and GASSD rules coincide only when these two means are equal.
{"title":"Paradoxes with Optimal Investment Rules","authors":"H. Levy","doi":"10.2139/ssrn.3504401","DOIUrl":"https://doi.org/10.2139/ssrn.3504401","url":null,"abstract":"Assuming risk aversion, the Second degree Stochastic Dominance (SSD) criterion is employed for prospect ranking. However, in some cases the optimal SSD rule cannot distinguish between two alternative prospects when it is obvious that one prospect is preferred by practically all risk averters-a paradoxical result. The optimal Almost SSD (ASSD) rule which aims to avoid such paradoxical results also fails to do so. We suggest a new rule, the General ASSD rule (GASSD) to replace the ASSD rule, which in turn, solves the SSD as well the ASSD existing and potential paradoxes. The GASSD rule, unlike the SSD and ASSD rules, takes into account the magnitude of the mean return difference of the two prospects under consideration. The ASSD and the GASSD rules coincide only when these two means are equal.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126723786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We examine funding conditions and U.S. insurance company stock returns. Although constrained funding conditions, signaled by restrictive Federal Reserve monetary policy, correspond with increases in the future payouts of fixed-income securities held by insurance firms and potentially provide value through the liability side of insurer balance sheets, they also decrease the values of securities currently held in insurer portfolios. Prior research finds that restrictive policy has a negative effect on equity returns in general. Our results suggest that the negative impacts of constrained funding environments outweigh the potential positives, as insurance company stock returns are significantly lower during periods of constrained funding. This effect varies within a given funding state and also across insurer types. The effect is strongest during the first 3 months of a constrained funding environment and for life and health insurers, the insurer types with longer portfolio durations. For property and liability (P&L) insurers, lower stock return performance exists only in the first 3 months of a constrained funding environment. In the subsequent months, P&L insurers actually have higher stock returns during constrained periods, consistent with their typically shorter-duration asset portfolios, which are more quickly rolled over into new higher-yielding securities.
{"title":"Funding Conditions and Insurance Stock Returns: Do Insurance Stocks Really Benefit from Rising Interest Rate Regimes?","authors":"Tyler K. Jensen, Robert R. Johnson, Michael J. McNamara","doi":"10.1111/rmir.12133","DOIUrl":"https://doi.org/10.1111/rmir.12133","url":null,"abstract":"We examine funding conditions and U.S. insurance company stock returns. Although constrained funding conditions, signaled by restrictive Federal Reserve monetary policy, correspond with increases in the future payouts of fixed‐income securities held by insurance firms and potentially provide value through the liability side of insurer balance sheets, they also decrease the values of securities currently held in insurer portfolios. Prior research finds that restrictive policy has a negative effect on equity returns in general. Our results suggest the negative impacts of constrained funding environments outweigh the potential positives, as insurance company stock returns are significantly lower during periods of constrained funding. This effect varies within a given funding state and also across insurer type. The effect is strongest during the first 3 months of a constrained funding environment and for life and health insurers—insurer types with longer portfolio durations. For property and liability (P&L) insurers, lower stock return performance only exists in the first 3 months of a constrained funding environment. In the subsequent months, P&L insurers actually have higher stock returns during constrained periods, consistent with their typically shorter duration asset portfolios, which are more quickly rolled over into new higher‐yielding securities.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"279 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114567179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In classical models of markets, the state of nature is revealed regardless of the actions agents take. If instead agents can uncover information, they will determine which states can be distinguished and thus which goods are traded. Competitive equilibria can then be inefficient. One source of inefficiency is self-confirmation: price expectations can dissuade agents from discovering the information that would invalidate their expectations. Inefficiency can also occur if agents have diverse priors or fail to be risk-averse, possibilities that would normally be irrelevant for the first welfare theorem. Restoring optimality requires excluding these possibilities, and in addition a competitive price rule must hold: the prices anticipated when agents contemplate an information discovery must be proportional to the probabilities of the events that could be revealed. In the most important application of information discovery, experimenting with new technologies, our results show that competitive markets can lead firms to experiment efficiently.
{"title":"Competitive Information Discovery","authors":"Michael Mandler","doi":"10.2139/ssrn.3035373","DOIUrl":"https://doi.org/10.2139/ssrn.3035373","url":null,"abstract":"In classical models of markets, the state of nature is revealed regardless of the actions agents take. If instead agents can uncover information they will determine which states can be distinguished and thus which goods are traded. Competitive equilibria can then be inefficient. One source of inefficiency is self-confirmation: price expectations can dissuade agents from discovering the information that would invalidate their expectations. Inefficiency can also occur if agents have diverse priors or fail to be risk-averse, possibilities that normally would be irrelevant for the first welfare theorem. Restoring optimality requires these assumptions and a competitive price rule must hold: the prices anticipated when agents contemplate an information discovery must be proportional to the probabilities of the events that could be revealed. In the most important application of information discovery, experimenting with new technologies, our results show that competitive markets can lead firms to experiment efficiently.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"126 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129754764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The "Supplement to Imprecise Probabilities-Historical appendix: Theories of imprecise belief " presents a severely inaccurate representation of Keynes’s contributions to imprecise probability. It also completely ignores the seminal, path breaking contributions to imprecise probability made by George Boole in his 1854 The Laws of Thought in chapters 16-21. The error is compounded with regard to Keynes because Keynes’s entire system of logical probability in the A Treatise on Probability is built on Boole’s exposition of lower (greatest lower bound)-upper (least upper bound) probabilities that Keynes used in Parts II and III of the A Treatise on Probability to develop his method of inexact measurement and approximation using interval valued probability.
It is in chapter 15 of the TP on pp. 161-163, as well as in chapter 16, 17, 20 and 22 of the TP, in a very detailed, mathematical analysis of his earlier, brief, graphical exposition in chapter III of the TP, that an analysis of imprecise probability was provided by Keynes. Keynes, through his adaptation of Boole’s original approach, that Keynes had adapted and improved upon in the same manner that Wilbraham had improved the Boolean approach in 1854, allowed for him to provide an explicit, imprecise approach to probability showing the importance of non additivity in 1921.
In 1986 and 1996,Theodore Hailparin showed decisively that the Boole-Keynes approach involved the use of an early version of linear programming techniques, using an initial, basic, feasible solution, involving primal and dual maximization and minimization problems with constraints that were both equalities (linear) and inequalities (nonlinear) [Keynes’s inequations].
Keynes had already solved the mystery of the diagram on page 39 of chapter III in 1921(Actually,the same analysis is provided by Keynes in the 1908 Cambridge Fellowship Dissertation. This Fellowship Dissertation would have earned Keynes a Ph.D in Applied Mathematics at any University in the world at that time if he had been interested in obtaining a Ph.D,which he was not) in chapter 15 with additional problems solved in chapters 16, 17, 20, and 22 of the TP.
Keynes’s analysis on pp.161-163 is straightforward if the reader of the TP has the requisite mathematical training and knowledge.It is highly likely that no economist ,who has written on Keynes’s TP,has such knowledge.
The presentation of Keynes’s theory in the appendix to the entry on’ Imprecise Probability’ is completely misleading. The failure to mention Boole’s path breaking work is a lacuna that can only be remedied by an extensive revision. Hopefully,such a revision will finally set right the historical record regarding the original and detailed mathematical and logical contributions made by Keynes in 1907,1908 and 1921, and by Boole in 1854.
{"title":"On the Need for an Extensive Revision of the ‘Imprecise Probabilities’ Entry regarding Boole and Keynes in The Stanford Encyclopedia of Philosophy (Spring, 2019 Edition) in the ‘ Supplement to Imprecise Probabilities-Historical appendix: Theories of Imprecise Belief.’","authors":"M. E. Brady","doi":"10.2139/ssrn.3495817","DOIUrl":"https://doi.org/10.2139/ssrn.3495817","url":null,"abstract":"The \"Supplement to Imprecise Probabilities-Historical appendix: Theories of imprecise belief \" presents a severely inaccurate representation of Keynes’s contributions to imprecise probability. It also completely ignores the seminal, path breaking contributions to imprecise probability made by George Boole in his 1854 The Laws of Thought in chapters 16-21. The error is compounded with regard to Keynes because Keynes’s entire system of logical probability in the A Treatise on Probability is built on Boole’s exposition of lower (greatest lower bound)-upper (least upper bound) probabilities that Keynes used in Parts II and III of the A Treatise on Probability to develop his method of inexact measurement and approximation using interval valued probability.<br><br>It is in chapter 15 of the TP on pp. 161-163, as well as in chapter 16, 17, 20 and 22 of the TP, in a very detailed, mathematical analysis of his earlier, brief, graphical exposition in chapter III of the TP, that an analysis of imprecise probability was provided by Keynes. Keynes, through his adaptation of Boole’s original approach, that Keynes had adapted and improved upon in the same manner that Wilbraham had improved the Boolean approach in 1854, allowed for him to provide an explicit, imprecise approach to probability showing the importance of non additivity in 1921.<br><br>In 1986 and 1996,Theodore Hailparin showed decisively that the Boole-Keynes approach involved the use of an early version of linear programming techniques, using an initial, basic, feasible solution, involving primal and dual maximization and minimization problems with constraints that were both equalities (linear) and inequalities (nonlinear) [Keynes’s inequations].<br><br>Keynes had already solved the mystery of the diagram on page 39 of chapter III in 1921(Actually,the same analysis is provided by Keynes in the 1908 Cambridge Fellowship Dissertation. This Fellowship Dissertation would have earned Keynes a Ph.D in Applied Mathematics at any University in the world at that time if he had been interested in obtaining a Ph.D,which he was not) in chapter 15 with additional problems solved in chapters 16, 17, 20, and 22 of the TP.<br><br>Keynes’s analysis on pp.161-163 is straightforward if the reader of the TP has the requisite mathematical training and knowledge.It is highly likely that no economist ,who has written on Keynes’s TP,has such knowledge.<br><br>The presentation of Keynes’s theory in the appendix to the entry on’ Imprecise Probability’ is completely misleading. The failure to mention Boole’s path breaking work is a lacuna that can only be remedied by an extensive revision. Hopefully,such a revision will finally set right the historical record regarding the original and detailed mathematical and logical contributions made by Keynes in 1907,1908 and 1921, and by Boole in 1854. 
<br><br>","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125869498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keynes's initial, introductory presentation of his inexact measurement and approximation approach to interval-valued probability occurred on pages 38-40 of chapter III of A Treatise on Probability. Keynes used a simple diagram on page 38 to illustrate the non-linear and non-additive nature of interval-valued probability. He also informed the reader on pages 37-38 that the introductory analysis on pp. 38-40 would be brief, whereas the analysis in Part II would be detailed. Post Keynesian, Institutionalist, and heterodox economists have completely ignored Keynes's warning and have completely misinterpreted and misrepresented Keynes's introductory analysis of interval-valued probability as an ordinal theory of probability.
The result of this misinterpretation and misrepresentation of Keynes's decision theory can be seen in the responses of public officials who have been exposed to the misinterpretations and misrepresentations of heterodox economists when they consider how to base their country's monetary and fiscal policy proposals on an analysis that is supposed to incorporate Keynes's crucial positions on uncertainty and expectations.
{"title":"On the Erroneous Heterodox and Post Keynesian Belief That Keynes’s Interval Valued Decision Theory in the A Treatise on Probability (1921) Was an Ordinal Theory of Probability","authors":"M. E. Brady","doi":"10.2139/ssrn.3492506","DOIUrl":"https://doi.org/10.2139/ssrn.3492506","url":null,"abstract":"Keynes’s initial, introductory presentation of his inexact measurement, approximation approach to interval valued probability occurred on pages 38-40 of chapter III of the A Treatise on Probability. Keynes used a simple diagram on page 38 to illustrate the non linear and non additive nature of interval valued probability. He also informed the reader on pages 37-38 that the introductory analysis on pp.38-40 would be a brief analysis whereas the analysis in Part II would be a detailed analysis. Post Keynesian, Institutionalist and heterodix economists have completely ignored Keynes’s warning and completely misinterpreted and misrepresented Keynes’s introductory analysis on interval valued probability as an ordinal theory of probability.<br><br>The result of this misinterpretation and misrepresentation of Keynes’s decision theory can be seen in the response of public officials, who have been exposed to the misinterpretations and misrepresentations of heterodox economists, when they are considering how to base their country’s monetary and fiscal policy proposals on an analysis that is supposed to incorporate Keynes’s crucial positions on uncertainty and expectations.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133236390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-agent, moral-hazard model of a bank operating under deposit insurance and limited liability is used to analyze the connection between the compensation of bank employees (below the CEO) and bank risk. Limited liability with deposit insurance is a force that distorts effort downward. However, the need to increase compensation to risk-averse employees in order to compensate them for extra bank risk is a force that reduces this effect. Optimal contracts use relative performance and are implementable as a wage with bonuses tied to individual and firm performance. The connection between pay for performance and bank risk depends on the correlation of returns. If employee returns are uncorrelated, the form of pay is irrelevant for risk. If returns are perfectly correlated, a low wage can indicate risk. Connections to compensation regulation and characteristics of organizations are discussed.
{"title":"Banker Compensation, Relative Performance, and Bank Risk","authors":"Arantxa Jarque, E. Prescott","doi":"10.2139/ssrn.3485742","DOIUrl":"https://doi.org/10.2139/ssrn.3485742","url":null,"abstract":"A multi-agent, moral-hazard model of a bank operating under deposit insurance and limited liability is used to analyze the connection between compensation of bank employees (below CEO) and bank risk. Limited liability with deposit insurance is a force that distorts effort down. However, the need to increase compensation to risk-averse employees in order to compensate them for extra bank risk is a force that reduces this effect. Optimal contracts use relative performance and are implementable as a wage with bonuses tied to individual and firm performance. The connection between pay for performance and bank risk depends on correlation of returns. If employee returns are uncorrelated, the form of pay is irrelevant for risk. If returns are perfectly correlated, a low wage can indicate risk. Connections to compensation regulation and characteristics of organizations are discussed.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116347079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To minimize the cost of making decisions, an agent should use criteria to sort alternatives, and each criterion should sort coarsely. To decide on a movie, for example, an agent could use one criterion that orders movies by genre categories, another by director categories, and so on, with a small number of categories in each case. The agent then needs to aggregate the criterion orderings, possibly by a weighted vote, to arrive at choices. As criteria become coarser (each criterion has fewer categories), decision-making costs fall, even though an agent must then use more criteria. The most efficient option is consequently to select binary criteria, which have two categories each. This result holds even when the marginal cost of using additional categories diminishes to 0. The extensive use of coarse criteria in practice may therefore be a result of optimization rather than cognitive limitations. Binary criteria also generate choice functions that maximize rational preferences: decision-making efficiency implies rational choice.
{"title":"Coarse, Efficient Decision-Making","authors":"Michael Mandler","doi":"10.2139/ssrn.2494600","DOIUrl":"https://doi.org/10.2139/ssrn.2494600","url":null,"abstract":"\u0000 To minimize the cost of making decisions, an agent should use criteria to sort alternatives and each criterion should sort coarsely. To decide on a movie, for example, an agent could use one criterion that orders movies by genre categories, another by director categories, and so on, with a small number of categories in each case. The agent then needs to aggregate the criterion orderings, possibly by a weighted vote, to arrive at choices. As criteria become coarser (each criterion has fewer categories) decision-making costs fall, even though an agent must then use more criteria. The most efficient option is consequently to select the binary criteria with two categories each. This result holds even when the marginal cost of using additional categories diminishes to 0. The extensive use of coarse criteria in practice may therefore be a result of optimization rather than cognitive limitations. Binary criteria also generate choice functions that maximize rational preferences: decision-making efficiency implies rational choice.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125386874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most school choice and other matching mechanisms are based on deferred acceptance (DA) for its incentive properties. However, non-strategyproof mechanisms can dominate DA in welfare because manipulation in preference rankings can reflect the intensities of underlying cardinal preferences. In this work, we use the parallel mechanism of Chen and Kesten, which interpolates between the Boston mechanism and DA, to quantify this tradeoff. While it is previously known that mechanisms closer to the Boston mechanism are more manipulable, we show that they are also more efficient in student welfare if school priorities are weak. Our theoretical results establish the inefficiency-manipulability tradeoff in the worst case, while our simulation results show the same tradeoff in the typical case.
{"title":"Inefficiency-Manipulability Tradeoff in the Parallel Mechanism","authors":"Jerry Anunrojwong","doi":"10.2139/ssrn.3387000","DOIUrl":"https://doi.org/10.2139/ssrn.3387000","url":null,"abstract":"Most school choice and other matching mechanisms are based on deferred acceptance (DA) for its incentive properties. However, non-strategyproof mechanisms can dominate DA in welfare because manipulation in preference rankings can reflect the intensities of underlying cardinal preferences. In this work, we use the parallel mechanism of Chen and Kesten, which interpolates between Boston mechanism and DA, to quantify this tradeoff. While it is previously known that mechanisms that are closer to Boston mechanism are more manipulable, we show that they are also more efficient in student welfare if school priorities are weak. Our theoretical results show the inefficiency-manipulability tradeoff in the worst case, while our simulation results show the same tradeoff in the typical case.","PeriodicalId":281936,"journal":{"name":"ERN: Other Microeconomics: Decision-Making under Risk & Uncertainty (Topic)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132215963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}