In number theory, a perfect number is a positive integer that is equal to the sum of its proper positive divisors, i.e. its positive divisors excluding the number itself. Equivalently, a perfect number is half the sum of all of its positive divisors (including itself), i.e. σ1(n) = 2n. To explain in practical terms, we elaborate on the first few perfect numbers. It may be noted that perfect numbers are sparse and thinly dispersed. Mathematicians have been working on perfect numbers since the 3rd century BC, yet as of April 2018, after some 2,300 years of active research, only 50 perfect numbers had been recognized. There are 2 perfect numbers in the first 100 integers and 4 in the first million. The gap between consecutive perfect numbers grows rapidly: at least one perfect number can still be found among 4-digit numbers, after which they become a real rarity, with subsequent perfect numbers appearing at 8, 10, 12 and 19 digits. The 15th perfect number has 770 digits, the 16th has 1,327 digits, the 25th has 13,066 digits, and the 50th has 46,498,850 digits. We find that a perfect number is always predictable by the binary pattern 1...(p) 0...(p-1), that is, p ones followed by p-1 zeros, where 1 and 0 are binary digits and p is the count of leading binary ones. We also argue that any binary number of this form, if perfect, is always even. We further observe that each of the first 50 known perfect numbers ends in 6 or 28 as its last one or two digits. Therefore a perfect number is always predictable and even.
{"title":"Mystery of ‘Perfect Numbers’ Resolved – Perfect Number Is Always Even and Predictable","authors":"V. Sapovadia, S. Patel","doi":"10.2139/ssrn.3210227","DOIUrl":"https://doi.org/10.2139/ssrn.3210227","url":null,"abstract":"In number theory, a perfect number is a positive integer that is equal to the sum of its proper positive divisors, excluding the number itself. In other words, a perfect number is a number that is half the sum of all of its positive divisors (including itself) i.e. σ1(n) = 2n. To explain in practical terms, we elaborate first few Perfect Numbers. It may be noted that ‘Perfect Numbers’ are sparse are thinly dispersed. Starting from 3rd Century BC, mathematicians are working on Perfect Numbers. Till April 2018, i.e. during last 2300 years active research, researchers could recognize only 50 perfect numbers. There are 2 perfect numbers in first 100 and 4 in first million. Absolute distance between two perfect numbers increase exponentially as you go higher to the next perfect number . One can find at least one perfect number till 4 digit numbers, and then it becomes a real rarity. Subsequent perfect numbers appears at 8, 10, 12 and 19 digits. 15th perfect number has 770 digits while 16th have 1327 digits. 25th perfect number has 13066 digits. 50th perfect number has 46,498,850 digits. We found that perfect number is always predictable by using formula 1 (p) 0 (p-1) where 1 and 0 are binary digits and p = count of binary digit. We also argue that if any binary number 1...(p) 0 (p-1) if perfect number, will always an even number. We also observed that first known 50 perfect number ends with 6 or 28 as last one or two digits. 
Therefore a perfect number is always predictable and even.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134488790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
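The binary pattern described in the abstract, p ones followed by p-1 zeros, is the number 2^(p-1) · (2^p - 1), the classical Euclid-Euler form of an even perfect number (perfect exactly when 2^p - 1 is a Mersenne prime). A minimal sketch, written independently of the paper, checking that form against the divisor-sum definition:

```python
def is_perfect(n):
    # A perfect number equals the sum of its proper divisors.
    return n > 1 and sum(d for d in range(1, n // 2 + 1) if n % d == 0) == n

def candidate(p):
    # Binary pattern: p ones followed by (p - 1) zeros,
    # i.e. 2**(p - 1) * (2**p - 1), the Euclid-Euler form.
    return int("1" * p + "0" * (p - 1), 2)

for p in [2, 3, 5, 7]:
    n = candidate(p)
    print(p, n, is_perfect(n))  # 6, 28, 496, 8128 are all perfect
```

Note that the pattern alone is not sufficient: for example, p = 4 gives 120, which is not perfect because 2^4 - 1 = 15 is composite.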
In the framework of preference rankings, the interest can lie in clustering individuals or items in order to reduce the complexity of the preference space and ease the interpretation of collected data. Recent years have seen a remarkable flowering of works on the use of decision trees for clustering preference vectors. Decision trees are useful and intuitive, but they are very unstable: small perturbations in the data can bring big changes in the tree. This is why more stable procedures may be necessary for clustering ranking data. In this work, a Projection Clustering Unfolding (PCU) algorithm for preference data is proposed in order to extract useful information in a low-dimensional subspace, starting from a high-dimensional but mostly empty space. Comparison between unfolding configurations and PCU solutions is carried out through Procrustes analysis.
{"title":"Projection Clustering Unfolding: A New Algorithm for Clustering Individuals or Items In A Preference Matrix","authors":"M. Sciandra, Antonio D’Ambrosio, A. Plaia","doi":"10.2139/ssrn.3209215","DOIUrl":"https://doi.org/10.2139/ssrn.3209215","url":null,"abstract":"In the framework of preference rankings, the interest can lie in clustering individuals or items in order to reduce the complexity of the preference space for an easier interpretation of collected data. The last years have seen a remarkable owering of works about the use of decision tree for clustering preference vectors. As a matter of fact, decision trees are useful and intuitive, but they are very unstable: small perturbations bring big changes. This is the reason why it could be necessary to use more stable procedures in order to clustering ranking data. In this work, a Projection Clustering Unfolding (PCU) algorithm for preference data will be proposed in order to extract useful information in a low-dimensional subspace by starting from an high but mostly empty dimensional space. Comparison between unfolding configurations and PCU solutions will be carried out through Procrustes analysis.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131088836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
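Procrustes analysis, used above to compare unfolding configurations with PCU solutions, aligns two point configurations by removing translation, scaling and rotation, and reports a residual disparity. A minimal sketch with SciPy on hypothetical data (not the paper's data or algorithm):

```python
import numpy as np
from scipy.spatial import procrustes

# Two hypothetical 2-D configurations of the same five items, standing in
# for an unfolding solution and a PCU solution of the same preference matrix.
rng = np.random.default_rng(0)
config_a = rng.standard_normal((5, 2))

# config_b is config_a rotated, scaled and translated; Procrustes analysis
# should therefore report a near-zero disparity between the two.
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
config_b = 2.0 * config_a @ rot + np.array([1.0, -3.0])

mtx1, mtx2, disparity = procrustes(config_a, config_b)
print(disparity < 1e-9)  # True: configurations match up to a similarity transform
```

A disparity near zero means the two solutions describe essentially the same geometry; larger values quantify how much the configurations genuinely differ.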
The aim of this paper is to present the elementary equations one can use to calibrate (by maximizing the log-likelihood) and to simulate under a risk-neutral framework (through Monte Carlo simulation) the stochastic process known as the trending Ornstein-Uhlenbeck process.
{"title":"The Trending Ornstein-Uhlenbeck Process: A Technical Note","authors":"Carlos Mejía, Carlos Andres Zapata Quimbayo","doi":"10.2139/ssrn.3263789","DOIUrl":"https://doi.org/10.2139/ssrn.3263789","url":null,"abstract":"The aim of this paper is to present the elemental equations we can use to calibrate (through the maximum log-likelihood method) and to simulate under a risk-neutral framework (through the Monte Carlo simulation method) the stochastic process known as the trending Ornstein-Uhlenbeck process.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132004252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
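A trending Ornstein-Uhlenbeck process reverts to a mean level that itself moves over time. One common parameterization uses a linear trend, dX = κ(a + bt − X) dt + σ dW; the paper's exact specification may differ. A minimal Euler-Maruyama Monte Carlo sketch under that assumption:

```python
import numpy as np

def simulate_trending_ou(x0, kappa, a, b, sigma, dt, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of dX = kappa*(a + b*t - X) dt + sigma dW.

    The linear trend a + b*t is an assumed parameterization of the
    'trending' mean-reversion level, used here for illustration only.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    paths = [x.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x = x + kappa * (a + b * t - x) * dt + sigma * dw
        paths.append(x.copy())
    return np.array(paths)  # shape (n_steps + 1, n_paths)

paths = simulate_trending_ou(x0=1.0, kappa=2.0, a=1.0, b=0.5,
                             sigma=0.2, dt=1 / 252, n_steps=252, n_paths=1000)
print(paths.shape)  # (253, 1000)
# The cross-sectional mean at t = 1 lags the trend level a + b*t = 1.5,
# since mean reversion pulls the process toward the trend with delay.
```

For pricing applications one would simulate under the risk-neutral drift and calibrate (κ, a, b, σ) by maximizing the Gaussian log-likelihood of the exact discrete transition density.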
The main method for numerical solution of PDEs in finance is the finite difference method (FDM). We show how an alternative method, the finite element method (FEM), can be used instead. The main strength of FEM is arguably the flexibility of its grid construction: the grid is no longer a set of isolated points but a grid of basis functions. Compared with FDM, this means in particular that no interpolation is needed, as the value of the contingent claim is given everywhere in the space domain by a local function. The introductory exposition is dedicated to a general ODE and then moves to a Galerkin FEM formulation applied to a Black-Scholes PDE.
{"title":"Galerkin FEM for Black-Scholes PDE","authors":"Marek Kolman","doi":"10.2139/ssrn.3081892","DOIUrl":"https://doi.org/10.2139/ssrn.3081892","url":null,"abstract":"The main method for numerical solutions to PDEs in finance is the Finite Difference method (FDM). We show how an alternative method, the Finite Element method (FEM) can be used instead. The main strength of FEM is arguably its flexibility given by the grid construction which is no longer a set of isolated points but a grid of functions. This compared to FDM, in particular, means that no interpolation is needed as the value of the contingent claim is given everywhere in the space domain by a local function. The introductory exposition is dedicated to a general ODE and then moves to a Galerkin FEM formulation applied to a Black-Scholes PDE.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133264347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a generalization of the rational expectations framework to allow for multiplicative sunspot shocks and temporarily unstable paths. Then, we provide an econometric strategy to estimate this generalized model on the data. Our approach yields drifting parameters and stochastic volatility. The methodology allows the data to choose between different possible alternatives: determinacy, indeterminacy and temporary instability. We apply our methodology to US inflation dynamics in the ‘70s through the lens of a simple New Keynesian model. When temporarily unstable paths are allowed, the data unambiguously select them to explain the stagflation period in the ‘70s.
{"title":"Walk on the Wild Side: Multiplicative Sunspots and Temporarily Unstable Paths","authors":"G. Ascari, Paolo Bonomolo, H. Lopes","doi":"10.2139/ssrn.3191806","DOIUrl":"https://doi.org/10.2139/ssrn.3191806","url":null,"abstract":"We propose a generalization of the rational expectations framework to allow for multiplicative sunspot shocks and temporarily unstable paths. Then, we provide an econometric strategy to estimate this generalized model on the data. Our approach yields drifting parameters and stochastic volatility. The methodology allows the data to choose between different possible alternatives: determinacy, indeterminacy and temporary instability. We apply our methodology to US inflation dynamics in the ‘70s through the lens of a simple New Keynesian model. When temporarily unstable paths are allowed, the data unambiguously select them to explain the stagflation period in the ‘70s.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123480745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The solutions to robust optimization problems are sometimes too conservative because of the focus on worst-case performance. For the least-squares (LS) problem, we describe a way to overcome this by combining the classical formulation with its robust version. We do this by constructing a sequence of problems that are parameterized in terms of the well-estimated aspects of the data. One end of this sequence is the Classical LS, and the other end is a variant of Robust LS that we construct for this purpose. By choosing the right point in the sequence, we are selectively robust only to the poorly estimated aspects of the data. However, we show that better estimation does not imply better prediction. We then transform the problem to align the estimation and prediction objectives, calling it objective matching. This transformation improves prediction while provably retaining the problem structure. Objective matching allows our method (called Unified Least Squares or ULS) to consistently match or outperform other state-of-the-art techniques, including both ridge and LASSO regression, on simulations and real-world data sets.
{"title":"Unified Classical and Robust Optimization for Least Squares","authors":"Long Zhao, Deepayan Chakrabarti, K. Muthuraman","doi":"10.2139/ssrn.3182422","DOIUrl":"https://doi.org/10.2139/ssrn.3182422","url":null,"abstract":"The solutions to robust optimization problems are sometimes too conservative because of the focus on worst-case performance. For the least-squares (LS) problem, we describe a way to overcome this by combining the classical formulation with its robust version. We do this by constructing a sequence of problems that are parameterized in terms of the well-estimated aspects of the data. One end of this sequence is the Classical LS, and the other end is a variant of Robust LS that we construct for this purpose. By choosing the right point in the sequence, we are selectively robust only to the poorly estimated aspects of the data. However, we show that better estimation does not imply better prediction. We then transform the problem to align the estimation and prediction objectives, calling it objective matching. This transformation improves prediction while provably retaining the problem structure. Objective matching allows our method (called Unified Least Squares or ULS) to consistently match or outperform other state-of-the-art techniques, including both ridge and LASSO regression, on simulations and real-world data sets.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122487551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
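The two endpoints of the sequence described above can be illustrated with a standard proxy: classical least squares on one end, and a ridge-style regularized solve on the other, since robust LS under ellipsoidal data uncertainty is known to reduce to Tikhonov-type regularization. This sketch is not the authors' ULS construction, only the contrast it interpolates between:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def classical_ls(X, y):
    # Ordinary least squares: min ||y - X b||^2.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def regularized_ls(X, y, lam):
    # min ||y - X b||^2 + lam ||b||^2: a ridge solve, standing in here
    # for the conservative (robust) endpoint of the sequence.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b0 = classical_ls(X, y)            # classical endpoint (lam = 0)
b1 = regularized_ls(X, y, 10.0)    # more conservative, shrunken estimate
print(np.linalg.norm(b1) < np.linalg.norm(b0))  # True: regularization shrinks b
```

ULS differs in that the amount and direction of robustness is tied to which aspects of the data are well estimated, rather than applied uniformly as in ridge regression.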
Renewable portfolio standards (RPS) are commonly promoted as a policy tool to reduce emissions associated with fossil generation, while also stimulating development of local renewable resource endowments. We develop a general equilibrium model of an RPS policy that captures key features such as a fixed factor renewable endowment, substitution across sectors of the economy, and endogenous price responses. We analytically decompose the effects of an RPS into a) a substitution effect, b) an output-tax effect, and c) an output effect. We show that an increase in the RPS can either deliver large resource booms or large emissions savings but not both. Our framework can translate different renewable resource endowments and pre-existing standards across states into economic and environmental impacts to inform current renewable energy and climate policies.
{"title":"Emissions Reductions or Green Booms? General Equilibrium Effects of a Renewable Portfolio Standard","authors":"A. Bento, Teevrat Garg, D. Kaffine","doi":"10.2139/ssrn.3176833","DOIUrl":"https://doi.org/10.2139/ssrn.3176833","url":null,"abstract":"Renewable portfolio standards (RPS) are commonly promoted as a policy tool to reduce emissions associated with fossil generation, while also stimulating development of local renewable resource endowments. We develop a general equilibrium model of an RPS policy that captures key features such as a fixed factor renewable endowment, substitution across sectors of the economy, and endogenous price responses. We analytically decompose the effects of an RPS into a) a substitution effect, b) an output-tax effect, and c) an output effect. We show that an increase in the RPS can either deliver large resource booms or large emissions savings but not both. Our framework can translate different renewable resource endowments and pre-existing standards across states into economic and environmental impacts to inform current renewable energy and climate policies.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124712997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalized pricing analytics is becoming an essential tool in retailing. Upon observing each arriving customer's profile, the firm sets a price based on the observed personalized information, such as income, education background, and past purchasing history, to extract more revenue. For new entrants to the business, the lack of historical data may severely limit the power and profitability of personalized pricing. We recommend a pricing policy that simultaneously learns customers' preferences from their profiles and maximizes profit. The policy does not depend on any prior assumptions about how the personalized information affects consumers' preferences. Instead, it adaptively clusters customers based on their profiles and preferences, offering similar prices to customers in the same cluster, thereby trading off granularity against accuracy. We prove that the regret of the proposed policy cannot be improved by any other policy.
{"title":"Nonparametric Pricing Analytics with Customer Covariates","authors":"Ningyuan Chen, G. Gallego","doi":"10.2139/ssrn.3172697","DOIUrl":"https://doi.org/10.2139/ssrn.3172697","url":null,"abstract":"Personalized pricing analytics is becoming an essential tool in retailing. Upon observing the profile of each arriving customer, the firm needs to set a price accordingly based on the observed personalized information, such as income, education background, and past purchasing history, to extract more revenue. For new entrants of the business, the lack of historical data may severely limit the power and profitability of personalized pricing. We recommend a pricing policy to firms that simultaneously learns the preference of customers based on the profiles and maximizes the profit. The pricing policy doesn't depend on any prior assumptions on how the personalized information affects consumers' preferences. Instead, it adaptively clusters customers based on their profiles and preferences, offering similar prices for customers who belong to the same cluster trading off granularity and accuracy. We prove that the regret of the proposed policy cannot be improved by any other policy.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122763797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we discuss the models of continuous dynamics on the 2-simplex that arise when different qualitative restrictions are imposed on the (continuous) functions that generate the dynamics on the 2-simplex. We consider three types of qualitative restrictions: inequality (or set-theoretical) conditions, monotonicity/curvature (or differential-geometrical) conditions, and topological conditions (referring to (transversal) non-(self-)intersection of trajectories). We discuss the implications of these restrictions for transitional and limit dynamics on the 2-simplex and the wide range of potential and existing applications of the resulting system-theoretical models in economics and, in particular, in economic growth and development theory.
{"title":"Models of Continuous Dynamics on the 2-Simplex and Applications in Economics","authors":"Denis Stijepic","doi":"10.2139/ssrn.3167236","DOIUrl":"https://doi.org/10.2139/ssrn.3167236","url":null,"abstract":"In this paper, we discuss the models of continuous dynamics on the 2-simplex that arise when different qualitative restrictions are imposed on the (continuous) functions that generate the dynamics on the 2-simplex. We consider three types of qualitative restrictions: inequality (or set-theoretical) conditions, monotonicity/curvature (or differential-geometrical) conditions, and topological conditions (referring to (transversal) non-(self-)intersection of trajectories). We discuss the implications of these restrictions for transitional and limit dynamics on the 2-simplex and the wide range of potential and existing applications of the resulting system-theoretical models in economics and, in particular, in economic growth and development theory.","PeriodicalId":299310,"journal":{"name":"Econometrics: Mathematical Methods & Programming eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130672831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}