
Latest Publications in PSN: Econometrics

Robust Inference for Moment Condition Models without Rational Expectations
Pub Date : 2021-10-18 DOI: 10.2139/ssrn.3945856
Xiaohong Chen, L. Hansen, Peter G. Hansen
Applied researchers using structural models under rational expectations (RE) often confront empirical evidence of misspecification. In this paper we consider a generic dynamic model that is posed as a vector of unconditional moment restrictions. We suppose that the model is globally misspecified under RE, and thus empirically flawed in a way that is not econometrically subtle. We relax the RE restriction by allowing subjective beliefs to differ from the data-generating probability (DGP) model while still maintaining that the moment conditions are satisfied under the subjective beliefs of economic agents. We use statistical measures of divergence relative to RE to bound the set of subjective probabilities. This form of misspecification alters econometric identification and inferences in a substantial way, leading us to construct robust confidence sets for various set identified functionals.
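To make the setup concrete, the sketch below is not the authors' estimator or confidence-set construction; it only illustrates the underlying idea with a made-up scalar moment function. A misspecified unconditional moment condition is reweighted by exponential tilting, the minimum-relative-entropy adjustment, and the resulting divergence from the empirical distribution is the kind of statistical distance used to bound the set of admissible subjective beliefs.

```python
# Minimal sketch: find the minimum-relative-entropy reweighting ("subjective
# belief") under which a misspecified moment condition E[g] = 0 holds exactly.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
g = rng.normal(loc=0.3, scale=1.0, size=(500, 1))   # misspecified: E[g] = 0.3, not 0

def dual(lam):
    # Dual objective of min KL(pi || empirical) subject to E_pi[g] = 0
    return np.mean(np.exp(g @ lam))

lam = minimize(dual, x0=np.zeros(1), method="BFGS").x
w = np.exp(g @ lam)
pi = w / w.sum()                                     # exponentially tilted weights
kl = np.sum(pi * np.log(pi * len(g)))                # divergence from the empirical distribution

print("reweighted moment:", float(pi.T @ g))         # ~ 0 by construction
print("divergence needed to rationalize the moment:", kl)
```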
Citations: 3
Augmented cointegrating linear models with possibly strongly correlated stationary and nonstationary regressors
Pub Date : 2021-09-23 DOI: 10.2139/ssrn.3943779
Zhen Peng, Chaohua Dong
Since an economic or financial variable may be affected by both stationary and nonstationary variables, this paper proposes a class of augmented cointegrating linear (ACL) models that accommodate these time series of different types. Moreover, the variables are allowed to be strongly correlated in the sense depicted in the paper. The asymptotic limit theory of the proposed estimator is established via joint convergence of the sample variance and covariance, which circumvents a drawback present in most of the nonstationary time series literature; a self-normalized central limit theorem is also given to facilitate statistical inference. Monte Carlo simulations confirm the theoretical results. Finally, the ACL regression model is applied to the US GDP time series, for which we show the proposed model is more accurate and better-performing than several competing models.
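As a rough illustration of the kind of mixed regression the paper studies, the sketch below simulates a random-walk regressor alongside a stationary AR(1) regressor and fits plain OLS; the data, coefficients, and estimator are assumptions, and the paper's contribution is precisely the joint-convergence asymptotics and self-normalized inference that ordinary OLS output lacks.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
x_ns = np.cumsum(rng.normal(size=n))          # nonstationary regressor: random walk
e = rng.normal(size=n)
x_st = np.empty(n)                            # stationary regressor: AR(1) with rho = 0.7
x_st[0] = e[0]
for t in range(1, n):
    x_st[t] = 0.7 * x_st[t - 1] + e[t]

y = 1.0 + 0.5 * x_ns + 2.0 * x_st + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x_ns, x_st]))
print(sm.OLS(y, X).fit().params)              # estimates of (1.0, 0.5, 2.0)
```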
Citations: 0
Structured Additive Regression and Tree Boosting
Pub Date : 2021-09-17 DOI: 10.2139/ssrn.3924412
Michael Mayer, Steven C. Bourassa, Martin Hoesli, D. Scognamiglio
Structured additive regression (STAR) models are a rich class of regression models that include the generalized linear model (GLM) and the generalized additive model (GAM). STAR models can be fitted by Bayesian approaches, component-wise gradient boosting, penalized least-squares, and deep learning. Using feature interaction constraints, we show that such models can also be implemented by the gradient boosting powerhouses XGBoost and LightGBM, thereby benefiting from their excellent predictive capabilities. Furthermore, we show how STAR models can be used for supervised dimension reduction and explain under what circumstances covariate effects of such models can be described in a transparent way. We illustrate the methodology with case studies pertaining to house price modeling, with very encouraging results regarding both interpretability and predictive performance.
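The implementation device named in the abstract is the feature interaction constraint. The sketch below shows one way this can be done with XGBoost on synthetic data; the feature grouping, hyperparameters, and data are illustrative assumptions rather than the paper's actual case study, and LightGBM offers a comparable constraint parameter.

```python
# Constraining every feature to its own interaction group turns a boosted tree
# ensemble into an additive (GAM-like) fit with one-dimensional shape functions.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(-2, 2, size=(n, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

model = XGBRegressor(
    n_estimators=300,
    learning_rate=0.05,
    max_depth=3,
    interaction_constraints="[[0], [1], [2]]",   # one group per feature => no interactions
)
model.fit(X, y)

# The effect of feature 0 can now be traced by varying it alone,
# holding the others fixed -- the additive "shape function".
grid = np.linspace(-2, 2, 5)
probe = np.column_stack([grid, np.zeros(5), np.zeros(5)])
print(model.predict(probe))
```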
Citations: 0
Large-Scale Precision Matrix Estimation With SQUIC
Pub Date : 2021-08-12 DOI: 10.2139/ssrn.3904001
Aryan Eftekhari, Lisa Gaedke-Merzhaeuser, D. Pasadakis, M. Bollhoefer, S. Scheidegger, O. Schenk
High-dimensional sparse precision matrix estimation is a ubiquitous task in multivariate analysis with applications that cross many disciplines. In this paper, we introduce the SQUIC package, which benefits from superior runtime performance and scalability, significantly exceeding the available state-of-the-art packages. This package is a second-order method that solves the L1-regularized maximum likelihood problem using highly optimized linear algebra subroutines, which leverage the underlying sparsity and the intrinsic parallelism in the computation. We provide two sets of numerical tests; the first one consists of didactic examples using synthetic datasets highlighting the performance and accuracy of the package, and the second one is a real-world classification problem of high dimensional medical datasets. The base algorithm is implemented in C++ with interfaces for R and Python.
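The abstract does not show SQUIC's call signature, so the sketch below uses scikit-learn's graphical lasso as a stand-in for the same L1-regularized maximum-likelihood problem on synthetic data, simply to show what a sparse precision matrix estimate looks like; the synthetic precision matrix and the penalty value are assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
p = 10
# Sparse tridiagonal precision matrix and a Gaussian sample from the implied covariance.
theta = np.eye(p) + 0.4 * np.eye(p, k=1) + 0.4 * np.eye(p, k=-1)
cov = np.linalg.inv(theta)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

est = GraphicalLasso(alpha=0.1).fit(X)
precision = est.precision_                       # L1-regularized ML estimate
print("nonzeros in estimated precision:", int(np.sum(np.abs(precision) > 1e-3)))
```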
Citations: 0
Error Correction Models and Regressions for Non-Cointegrated Variables
Pub Date : 2021-08-10 DOI: 10.2139/ssrn.3902889
Moawia Alghalith
We introduce valid regression models and valid error correction models for non-cointegrated variables. These models are also valid for cointegrated variables. Consequently, cointegration tests and analysis become unnecessary. Furthermore, our approach overcomes the lag selection problem.
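For orientation only, the conventional single-equation error correction regression that the note departs from is sketched below on simulated cointegrated data with the long-run coefficient treated as known; the note's own "valid" specification for non-cointegrated variables is not given in the abstract and is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x = np.cumsum(rng.normal(size=n))              # I(1) regressor
u = np.zeros(n)                                # stationary AR(1) equilibrium error
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 0.8 * x + u                                # cointegrated with x

dy, dx = np.diff(y), np.diff(x)
ec = (y - 0.8 * x)[:-1]                        # lagged equilibrium error
fit = sm.OLS(dy, sm.add_constant(np.column_stack([ec, dx]))).fit()
print(fit.params)                              # [const, adjustment speed ~ -0.5, short-run effect ~ 0.8]
```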
Citations: 0
Further Improving Finite Sample Approximation by Central Limit Theorems for Aggregate Efficiency
Pub Date : 2021-08-08 DOI: 10.2139/ssrn.3901240
Shirong Zhao
A simple and easy-to-implement method is proposed to further improve the finite sample approximation by central limit theorems for aggregate efficiency. By adopting the correction method in Simar and Zelenyuk (2020, EJOR), we further propose plugging the bias-corrected mean efficiency estimate, rather than just the mean efficiency estimate, into the variance estimator of aggregate efficiency. In extensive Monte Carlo experiments, although our newly proposed method is found to have smaller coverages than the method using the true variance, it has larger coverages across virtually all finite sample sizes and dimensions than the original method in Simar and Zelenyuk (2018, OR) and the correction method in Simar and Zelenyuk (2020, EJOR). A real data set is employed to show the differences among these three methods in the estimated variance and the estimated confidence intervals.
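A stripped-down sketch of where the proposed plug-in enters is shown below, using an unweighted mean of efficiency scores with the scores and the bias estimate assumed given; the cited papers work with weighted aggregate efficiency and estimate the bias with their own procedures, none of which is reproduced here.

```python
import numpy as np
from scipy.stats import norm

eff = np.array([0.81, 0.92, 0.77, 0.88, 0.95, 0.70, 0.85, 0.90])  # assumed efficiency scores
bias_hat = 0.03                                                    # assumed bias estimate

n = len(eff)
mu_bc = eff.mean() - bias_hat                   # bias-corrected mean efficiency
var_bc = np.mean((eff - mu_bc) ** 2)            # variance built around mu_bc, not the raw mean
half = norm.ppf(0.975) * np.sqrt(var_bc / n)
print(f"95% CI: [{mu_bc - half:.3f}, {mu_bc + half:.3f}]")
```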
Citations: 0
Testing for Signal-to-Noise Ratio in Linear Regression: A Test for Big Data Era
Pub Date : 2021-07-12 DOI: 10.2139/ssrn.3884683
Jae H. Kim
This paper proposes a test for the signal-to-noise ratio applicable to a range of significance tests and model diagnostics in a linear regression. It is particularly useful under a large or massive sample size, where a conventional test frequently rejects an economically negligible deviation from the null hypothesis. The test is conducted in the context of the traditional F-test, with its critical values increasing with sample size. It maintains desirable size properties under a large or massive sample size, when the null hypothesis is violated by a practically negligible margin.
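The sketch below reproduces only the motivation, not the proposed test: under the conventional F-test an economically negligible slope is eventually rejected once the sample is large enough, which is exactly what sample-size-dependent critical values are designed to prevent. All numbers are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
for n in (1_000, 100_000, 1_000_000):
    x = rng.normal(size=n)
    y = 0.01 * x + rng.normal(size=n)        # economically negligible slope
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(n, "F p-value for slope = 0:", float(fit.f_pvalue))
```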
Citations: 0
Display Optimization Under the Multinomial Logit Choice Model: Balancing Revenue and Customer Satisfaction
Pub Date : 2021-06-29 DOI: 10.2139/ssrn.3909033
Jacob B. Feldman, Puping (Phil) Jiang
In this paper, we consider an assortment optimization problem in which a platform must choose pairwise disjoint sets of assortments to offer across a series of T stages. Arriving customers begin their search process in the first stage and progress sequentially through the stages until their patience expires, at which point they make a multinomial-logit-based purchasing decision from among all products they have viewed throughout their search process. The goal is to choose the sequential displays of product offerings to maximize expected revenue. Additionally, we impose stage-specific constraints that ensure that as each customer progresses farther and farther through the T stages, there is a minimum level of “desirability” met by the collections of displayed products. We consider two related measures of desirability: purchase likelihood and expected utility derived from the offered assortments. In this way, the offered sequence of assortment must be both high earning and well-liked, which breaks from the traditional assortment setting, where customer considerations are generally not explicitly accounted for. We show that our assortment problem of interest is strongly NP-Hard, thus ruling out the existence of a fully polynomial-time approximation scheme (FPTAS). From an algorithmic standpoint, as a warm-up, we develop a simple constant factor approximation scheme in which we carefully stitch together myopically selected assortments for each stage. Our main algorithmic result consists of a polynomial-time approximation scheme (PTAS), which combines a handful of structural results related to the make-up of the optimal assortment sequence within an approximate dynamic programming framework.
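The choice primitive behind the model is standard multinomial logit. The small helper below computes purchase probabilities and the expected revenue of a displayed set, which is the objective the staged optimization maximizes; the prices, preference weights, and `expected_revenue` helper are invented for illustration and are not the paper's algorithm.

```python
import numpy as np

prices = np.array([10.0, 8.0, 6.0, 4.0])        # assumed product prices
v = np.array([1.2, 1.0, 0.8, 0.6])              # assumed MNL preference weights

def expected_revenue(S, prices, v):
    """Expected revenue of offering the index set S under the MNL model."""
    S = np.asarray(sorted(S))
    denom = 1.0 + v[S].sum()                     # 1.0 = outside (no-purchase) option
    choice_prob = v[S] / denom                   # P(buy product i | display S)
    return float(prices[S] @ choice_prob)

print(expected_revenue([0, 1], prices, v))
print(expected_revenue([0, 1, 2, 3], prices, v))
```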
Citations: 1
On the Estimation of a Class of Threshold Regression Models
Pub Date : 2021-05-21 DOI: 10.2139/ssrn.3850678
Ramamohan Rao
A rich variety of threshold regression models has been in use starting with Tobin (1958). However, several applications indicate the necessity for a class of threshold regression models that have not been considered so far. This note presents a specification of such models and offers a novel method of estimation.
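The note does not reproduce its new class or estimation method; for context, the textbook two-regime threshold regression estimated by a grid search over the threshold is sketched below on simulated data, which is the baseline such a note builds on.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
q = rng.normal(size=n)                            # threshold variable
y = np.where(q <= 0.5, 1.0 + 0.5 * x, -1.0 + 2.0 * x) + rng.normal(scale=0.3, size=n)

def ssr_at(gamma):
    # Sum of squared residuals from fitting a separate line in each regime.
    ssr = 0.0
    for mask in (q <= gamma, q > gamma):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        ssr += np.sum((y[mask] - X @ beta) ** 2)
    return ssr

grid = np.quantile(q, np.linspace(0.15, 0.85, 71))
gamma_hat = grid[np.argmin([ssr_at(g) for g in grid])]
print("estimated threshold:", gamma_hat)          # close to the true 0.5
```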
Citations: 0
A Gaussian Process Model of Cross-Category Dynamics in Brand Choice
Pub Date : 2021-04-22 DOI: 10.2139/ssrn.3832290
Ryan Dew, Yuhao Fan
Understanding individual customers’ sensitivities to prices, promotions, brand, and other aspects of the marketing mix is fundamental to a wide swath of marketing problems, including targeting and pricing. Companies that operate across many product categories have a unique opportunity, insofar as they can use purchasing data from one category to augment their insights in another. Such cross-category insights are especially crucial in situations where purchasing data may be rich in one category, and scarce in another. An important aspect of how consumers behave across categories is dynamics: preferences are not stable over time, and changes in individual-level preference parameters in one category may be indicative of changes in other categories, especially if those changes are driven by external factors. Yet, despite the rich history of modeling cross-category preferences, the marketing literature lacks a framework that flexibly accounts for correlated dynamics, or the cross-category interlinkages of individual-level sensitivity dynamics. In this work, we propose such a framework, leveraging individual-level, latent, multi-output Gaussian processes to build a non-parametric Bayesian choice model that allows information sharing of preference parameters across customers, time, and categories. We apply our model to grocery purchase data, and show that our model detects interesting dynamics of customers’ price sensitivities across multiple categories. Managerially, we show that capturing correlated dynamics yields substantial predictive gains, relative to benchmarks. Moreover, we find that capturing correlated dynamics can have implications for understanding changes in consumers preferences over time, and developing targeted marketing strategies based on those dynamics.
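A conceptual sketch of the core modeling idea follows: a latent Gaussian process yields a smoothly time-varying price sensitivity that feeds a logit brand-choice probability. This is far from the paper's full multi-output, cross-category, individual-level model; the kernel, its hyperparameters, and all numbers are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
weeks = np.arange(52, dtype=float)

# RBF kernel over time => correlated, smoothly drifting price sensitivities.
lengthscale, amplitude = 8.0, 0.5
K = amplitude**2 * np.exp(-0.5 * ((weeks[:, None] - weeks[None, :]) / lengthscale) ** 2)
beta_price = -1.0 + rng.multivariate_normal(np.zeros(52), K + 1e-8 * np.eye(52))

# Logit purchase probability for a brand priced at 3.0 versus an outside option.
price = 3.0
util = 2.0 + beta_price * price
prob_buy = 1.0 / (1.0 + np.exp(-util))
print(prob_buy[:5])                       # purchase probability in the first weeks
```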
Citations: 0