This paper proposes nonparametric kernel-smoothing estimation for panel data to examine the degree of heterogeneity across cross-sectional units. We first estimate the sample mean, autocovariances, and autocorrelations for each unit and then apply kernel smoothing to estimate their density functions. Because the kernel estimator depends on a bandwidth, asymptotic bias of very high order affects the required condition on the relative magnitudes of the cross-sectional sample size (N) and the time-series length (T). In particular, the condition on N and T becomes stronger and more complicated than those typically encountered in the long-panel literature without kernel smoothing. We also consider a split-panel jackknife method for bias correction and the construction of confidence intervals. An empirical application illustrates our procedure.
R. Okui and Takahide Yanagi, "Kernel Estimation for Panel Data with Heterogeneous Dynamics," ERN: Nonparametric Methods (Topic), February 24, 2018. doi:10.2139/ssrn.3128885
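The two-step idea in the abstract above (estimate a statistic per unit, then kernel-smooth across units) can be sketched as follows. This is a minimal illustration, not the paper's estimator: the simulated panel, the Gaussian kernel, and Silverman's rule-of-thumb bandwidth are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 100  # cross-sectional units and time periods (assumed sizes)

# Simulated panel with heterogeneous unit-specific means
mu = rng.normal(0.0, 1.0, N)
y = mu[:, None] + rng.normal(0.0, 1.0, (N, T))

# Step 1: estimate the mean for each unit from its own time series
ybar = y.mean(axis=1)

# Step 2: kernel-smooth the unit-level estimates into a density
def gaussian_kde(x, grid, h):
    """Gaussian kernel density estimate of x evaluated on grid with bandwidth h."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2.0 * np.pi))

h = 1.06 * ybar.std() * N ** (-1 / 5)  # Silverman's rule of thumb
grid = np.linspace(ybar.min() - 1.0, ybar.max() + 1.0, 201)
f_hat = gaussian_kde(ybar, grid, h)
```

The same second step applies unchanged if `ybar` is replaced by unit-level autocovariances or autocorrelations.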
I develop and apply a nonparametric approach to estimate demand in differentiated products markets. Estimating demand flexibly is key to addressing many questions in economics that hinge on the shape - and notably the curvature - of market demand functions. My approach applies to standard discrete choice settings, but accommodates a broader range of consumer behaviors and preferences, including complementarities across goods, consumer inattention, and consumer loss aversion. Further, no distributional assumptions are made on the unobservables and only limited functional form restrictions are imposed. Using California grocery store data, I apply my approach to perform two counterfactual exercises: quantifying the pass-through of a tax, and assessing how much the multi-product nature of sellers contributes to markups. In both cases, I find that estimating demand flexibly has a significant impact on the results relative to a standard random coefficients discrete choice model, and I highlight how the outcomes relate to the estimated shape of the demand functions.
Giovanni Compiani, "Nonparametric Demand Estimation in Differentiated Products Markets," ERN: Nonparametric Methods (Topic), January 13, 2018. doi:10.2139/ssrn.3134152
This paper estimates inequality of opportunity in Punjab, Pakistan using a non-parametric approach. Household-level data from the 2014 Multiple Indicator Cluster Survey are analyzed for this purpose. The household head's income is taken as the outcome, and three parental characteristics of the household head are used as circumstances: region of residence (rural/urban), wealth status, and the education level of the household head's father. Circumstances are equalized by dividing the sample into groups on the basis of these characteristics; within-group and between-group income inequality are then calculated. Within-group inequality is attributed to differences in the efforts of household heads, while between-group inequality is attributed to differences in circumstances and is termed inequality of opportunity. Our results indicate that up to 28% of the variation in income is due to differences in circumstances. Among the circumstances, father's education makes the most significant contribution to explaining the variation in household heads' income. The study highlights the need for compensatory government policies to address inequality of opportunity in Punjab, Pakistan. Providing equal access to educational opportunities for all segments of society is recommended as an important public policy measure to mitigate inequality of opportunity.
Zahid Pervaiz and Shahla Akram, "Estimating Inequality of Opportunities in Punjab (Pakistan): A Non Parametric Approach," ERN: Nonparametric Methods (Topic), January 1, 2018. doi:10.2139/ssrn.3213926
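The within/between split described above can be illustrated with the Theil T index, which decomposes exactly into a within-group ("effort") and a between-group ("opportunity") component. The two lognormal circumstance groups below are simulated stand-ins, not the MICS data.

```python
import numpy as np

def theil_t(y):
    """Theil T inequality index."""
    r = y / y.mean()
    return float(np.mean(r * np.log(r)))

rng = np.random.default_rng(1)
# Hypothetical incomes for two circumstance groups (simulated, not MICS data)
groups = [rng.lognormal(2.0, 0.5, 300), rng.lognormal(2.5, 0.5, 200)]
y = np.concatenate(groups)
m, n = y.mean(), len(y)

# Income-share weights s_g = (n_g * mean_g) / (n * mean)
s = [len(g) * g.mean() / (n * m) for g in groups]

T_within = sum(sg * theil_t(g) for sg, g in zip(s, groups))              # "effort"
T_between = sum(sg * np.log(g.mean() / m) for sg, g in zip(s, groups))   # "opportunity"
share_opportunity = T_between / (T_within + T_between)
```

The decomposition is exact: `T_within + T_between` equals the Theil index of the pooled sample, so the between-group share is a well-defined lower bound on inequality of opportunity.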
We propose a nonparametric local volatility Cheyette model and apply it to pricing interest rate swaptions. Concretely, given market prices of swaptions, we show how to construct a unique diffusion process consistent with these prices. We then link the resulting local volatility to the dynamics of the entire interest rate curve. The model preserves completeness and allows consistent pricing of illiquid, out-of-the-money and exotic interest rate products. The model is relatively straightforward to implement and calibrate and less involved than stochastic volatility approaches.
D. Gatarek and J. Jabłecki, "A Nonparametric Local Volatility Model for Swaptions Smile," ERN: Nonparametric Methods (Topic), August 11, 2017. doi:10.21314/JCF.2018.343
This note reconciles existing evidence on the abilities of the bootstrap with its use in the cost-effectiveness literature. We emphasise the role played by pivotal statistics in explaining the ability of the bootstrap to provide asymptotic refinements for the Incremental Net Benefit statistic. The discussion is illustrated with a Monte Carlo experiment.
Eduardo Fé and S. Peters, "A Note on the Use of the Bootstrap for Cost Effectiveness Analysis," ERN: Nonparametric Methods (Topic), January 24, 2017. doi:10.2139/ssrn.2994865
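The refinement the note discusses comes from bootstrapping a studentized (asymptotically pivotal) statistic rather than the raw estimate. A minimal percentile-t sketch for a sample mean, standing in for the Incremental Net Benefit, is given below; the exponential sample is a hypothetical placeholder for patient-level net benefits.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(1.0, 50)  # hypothetical skewed sample (placeholder data)
n, B = len(x), 1999
xbar = x.mean()
se = x.std(ddof=1) / np.sqrt(n)

# Bootstrap the studentized statistic t* = (mean* - mean) / se*,
# which is asymptotically pivotal (its limit law is parameter-free)
t_star = np.empty(B)
for b in range(B):
    xs = x[rng.integers(0, n, n)]
    t_star[b] = (xs.mean() - xbar) / (xs.std(ddof=1) / np.sqrt(n))

q_lo, q_hi = np.quantile(t_star, [0.025, 0.975])
ci = (xbar - q_hi * se, xbar - q_lo * se)  # percentile-t 95% interval
```

Because the studentized statistic is pivotal, this interval's coverage error shrinks faster in n than that of the simpler percentile interval.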
In this paper, we propose a novel method to identify the conditional average treatment effect partial derivative (CATE-PD) in an environment in which the treatment is endogenous, the treatment effect is heterogeneous, the candidate 'instrumental variables' can be correlated with latent errors, and the treatment selection does not need to be (weakly) monotone. We show that CATE-PD is point-identified under mild conditions if two-way exclusion restrictions exist: (a) an outcome-exclusive variable, which affects the treatment but is excluded from the potential outcome equation, and (b) a treatment-exclusive variable, which affects the potential outcome but is excluded from the selection equation. We also propose an asymptotically normal two-step estimator and illustrate our method by investigating how the return to education varies across regions at different levels of development in China.
Shenglong Liu, Ismael Mourifié, and Yuanyuan Wan, "Two-Way Exclusion Restrictions in Models with Heterogeneous Treatment Effects," ERN: Nonparametric Methods (Topic), April 5, 2016. doi:10.2139/ssrn.2761986
This paper provides sufficient conditions for nonparametrically identifying dynamic games with incomplete information, allowing for both multiple equilibria and unobserved heterogeneity. The identification proceeds in two steps. The first step identifies the equilibrium conditional choice probabilities and the state transitions using results developed in the measurement error literature. The existing measurement error literature relies on monotonicity assumptions to determine the order of the latent types. This paper, in contrast, exploits the identification structure to match the order, which is important for identifying the payoff primitives. The second step follows the existing literature in identifying the payoff parameters from the equilibrium conditions with exclusion restrictions. Multiple equilibria and unobserved heterogeneity can be distinguished by comparing payoff primitives.
Ruli Xiao, "Nonparametric Identification of Dynamic Games with Multiple Equilibria and Unobserved Heterogeneity," ERN: Nonparametric Methods (Topic), March 7, 2016. doi:10.2139/ssrn.2757272
This paper introduces nonparametric econometric methods that characterize general power law distributions under basic stability conditions. These methods extend the literature on power laws in the social sciences in several directions. First, we show that any stationary distribution in a random growth setting is shaped entirely by two factors - the idiosyncratic volatilities and reversion rates (a measure of cross-sectional mean reversion) for different ranks in the distribution. This result is valid regardless of how growth rates and volatilities vary across different economic agents, and hence applies to Gibrat's law and its extensions. Second, we present techniques to estimate these two factors using panel data. Third, we show how our results offer a structural explanation for a generalized size effect in which higher-ranked processes grow more slowly than lower-ranked processes on average. Finally, we employ our empirical methods using panel data on commodity prices and show that our techniques accurately describe the empirical distribution of relative commodity prices. We also show the existence of a generalized "size" effect for commodities, as predicted by our econometric theory.
Ricardo T. Fernholz, "Empirical Methods for Dynamic Power Law Distributions in the Social Sciences," ERN: Nonparametric Methods (Topic), January 30, 2016. doi:10.2139/ssrn.2735847
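The paper's rank-based estimators of idiosyncratic volatilities and reversion rates are more involved than can be shown here; as a minimal related illustration, the tail exponent of a power law distribution can be read off a log-rank-versus-log-size regression. This is a standard technique, not the paper's method, and the Pareto sample is simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
# Pareto(alpha = 1.5) sample as a stand-in for an empirical size distribution
alpha_true = 1.5
x = 1.0 + rng.pareto(alpha_true, 5000)

# Rank-size plot: log(rank) is approximately linear in log(size), slope -alpha
s = np.sort(x)[::-1]             # sizes in decreasing order
ranks = np.arange(1, len(s) + 1)
k = 500                          # restrict to the top-k tail observations
slope, _ = np.polyfit(np.log(s[:k]), np.log(ranks[:k]), 1)
alpha_hat = -slope
```

Deviations of the empirical rank-size curve from a straight line are exactly the kind of rank-dependent behavior the paper's framework is built to accommodate.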
The paper presents sharp bounds on the identified set for classical factor models and non-parametric topic models based on results from the non-negative factorization literature. It compares the standard assumption (for factor models) of orthonormality of the factors (principal components analysis) to the "natural" assumption of topic models of additivity and non-negativity. Under the former, the model is point identified when the number of factors is "small," but further restrictions such as those presented in Bai and Ng (2013) are needed to identify larger models. Under the latter, the paper characterizes the identified set and shows that the necessary condition for point identification presented in Huang et al. (2013) is also sufficient. In the two-factor case, this condition states that for each latent factor there must be some asset whose return gives it zero weight, and there must be some time periods where each factor's normalized return is zero. These "sparsity" conditions are characteristics of the observed data, not assumptions on the data generating process. The paper presents a "least squares" estimator in which the number of parameters to be estimated does not increase with the size of the data set. The paper shows that this estimator is consistent both when the number of time periods increases in the factor model and when the number of documents increases in the topic model. Unlike the similar estimator presented in the classical factor model literature (Stock and Watson (2002); Bai (2003)), this estimator does not rely on orthonormality.
C. Adams, "Set Identification and Estimation of Factor and Topic Models," ERN: Nonparametric Methods (Topic), November 2, 2015. doi:10.2139/ssrn.2685218
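The additivity and non-negativity restrictions connect factor and topic estimation to non-negative matrix factorization. A minimal least-squares NMF sketch using Lee-Seung multiplicative updates (a standard algorithm, not the paper's estimator) on simulated document-term data is:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical data generated exactly from k = 2 non-negative "topics"
k, n_docs, n_terms = 2, 100, 20
W0 = rng.dirichlet(np.ones(k), size=n_docs)     # document-topic weights
H0 = rng.dirichlet(np.ones(n_terms), size=k)    # topic-term distributions
X = W0 @ H0

# Multiplicative updates for min ||X - W H||_F subject to W, H >= 0;
# each update keeps entries non-negative and never increases the objective
W = rng.random((n_docs, k))
H = rng.random((k, n_terms))
eps = 1e-12
for _ in range(1000):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The recovered `W @ H` fits the data, but `W` and `H` themselves are only set-identified unless sparsity conditions of the kind the paper characterizes hold.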
Krugman's (1991) target zone model for exchange rate dynamics has become the reference for a large part of this literature. Despite its simplicity and elegance, empirical evidence has been lacking, not least because it is difficult to capture the predicted non-linear relationship between the observable exchange rate and the non-observable fundamental value. This is why we propose a different approach. By locally inverting the relation between the exchange rate and the fundamental value, we derive analytical expressions for the conditional volatility and the probability density as functions of the exchange rate. This allows us to examine Krugman's prediction directly from historical data and, furthermore, enables us to test the smooth-pasting condition, which is intimately related to the no-arbitrage condition. Concretely, we study the behaviour of the euro/Swiss franc exchange rate in the extraordinary period from September 6, 2011 to January 15, 2015, when the Swiss National Bank enforced a minimum exchange rate of 1.20 Swiss francs per euro. We show that the data are well explained by the theory and conclude that Krugman's target zone model holds after all, but apparently only under extreme and sustained pressure that continuously pushes the exchange rate very close to the boundary of the target zone.
S. Lera and D. Sornette, "Quantitative Modelling of the EUR/CHF Exchange Rate during the Target Zone Regime of September 2011 to January 2015," ERN: Nonparametric Methods (Topic), October 14, 2015. doi:10.2139/ssrn.2634425
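The key empirical object above, conditional volatility as a function of the exchange rate level, can be estimated nonparametrically by binning. The simulation below is purely illustrative: the dynamics, the floor at 1.20, and the decile binning are all assumptions of the sketch, not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated rate pinned above a floor b, a crude stand-in for EUR/CHF near 1.20;
# volatility is assumed to shrink as the rate approaches the boundary
b, n = 1.20, 20000
s = np.empty(n)
s[0] = 1.23
for t in range(1, n):
    vol = 0.002 * (s[t - 1] - b)
    s[t] = max(b + 1e-6, s[t - 1] + vol * rng.normal())

# Nonparametric estimate: volatility of changes conditional on the level,
# computed within decile bins of the level
levels, changes = s[:-1], np.diff(s)
edges = np.quantile(levels, np.linspace(0.0, 1.0, 11))
idx = np.clip(np.searchsorted(edges, levels, side="right") - 1, 0, 9)
cond_vol = np.array([changes[idx == j].std() for j in range(10)])
```

In this simulation `cond_vol` falls toward the floor, the qualitative pattern the target zone model predicts near a defended boundary.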