Pub Date: 2023-10-05 | DOI: 10.1007/s10182-023-00481-z
Ton de Waal, Jacco Daalmans
Non-response is a major problem for anyone collecting and processing data. A commonly used technique to deal with missing data is imputation, where missing values are estimated and filled into the dataset. Imputation can become challenging if the variable to be imputed has to comply with a known total. Even more challenging is the case where several variables in the same dataset need to be imputed and, in addition to known totals, logical restrictions between variables have to be satisfied. In our paper, we develop an approach for a broad class of imputation methods for multivariate categorical data such that previously published totals are preserved while logical restrictions on the data are satisfied. The developed approach can be used in combination with any imputation model that estimates imputation probabilities, i.e. the probability that imputation of a certain category for a variable in a certain unit leads to the correct value for this variable and unit.
Title: Calibrated imputation for multivariate categorical data. AStA Advances in Statistical Analysis 108(3), 545–576.
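As a toy illustration of what such imputation probabilities could feed into, the following sketch fills in one categorical variable so that the imputed category counts hit a known total. This is a naive greedy rule invented for illustration, not the authors' calibrated method, and all names are hypothetical:

```python
import numpy as np

def calibrated_impute(probs, targets):
    """Greedy toy: impute one categorical variable for n units so that the
    imputed category counts exactly match known totals.

    probs   : (n, k) array, probs[i, c] = estimated probability that
              category c is the correct value for unit i
    targets : length-k array of required category counts, summing to n
    """
    probs = np.asarray(probs, dtype=float)
    n, k = probs.shape
    remaining = np.array(targets, dtype=int)
    assert remaining.sum() == n
    result = np.full(n, -1, dtype=int)
    # visit (unit, category) pairs from most to least probable
    for flat in np.argsort(probs, axis=None)[::-1]:
        i, c = divmod(int(flat), k)
        if result[i] == -1 and remaining[c] > 0:
            result[i] = c
            remaining[c] -= 1
    return result

# three units with missing values, two categories, published total (2, 1)
imputed = calibrated_impute([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]], [2, 1])
```

A production method would also have to respect logical restrictions between variables, which this sketch ignores.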
Pub Date: 2023-08-22 | DOI: 10.1007/s10182-023-00479-7
Markus Loecher
Black box machine learning models are currently being used for high-stakes decision making in various parts of society such as healthcare and criminal justice. While tree-based ensemble methods such as random forests typically outperform deep learning models on tabular data sets, their built-in variable importance algorithms are known to be strongly biased toward high-entropy features. It was recently shown that the increasingly popular SHAP (SHapley Additive exPlanations) values suffer from a similar bias. We propose debiased or "shrunk" SHAP scores based on sample splitting which additionally enable the detection of overfitting issues at the feature level.
Title: Debiasing SHAP scores in random forests. AStA Advances in Statistical Analysis 108(2), 427–440.
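The sample-splitting idea can be sketched numerically. The snippet below is a loose stand-in: it uses a simple squared-correlation score instead of SHAP values (in practice one would compute SHAP values with the `shap` package on each half), but the shrinkage logic of keeping only the part of a score that replicates on held-out data is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * rng.normal(size=n)   # only feature 0 is informative

def importance(X, y):
    # stand-in feature score: squared correlation with the response
    return np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2
                     for j in range(X.shape[1])])

half = n // 2
imp_in = importance(X[:half], y[:half])    # scores on the "training" half
imp_out = importance(X[half:], y[half:])   # scores on the held-out half
shrunk = np.minimum(imp_in, imp_out)       # keep only what replicates
overfit_gap = imp_in - imp_out             # a large gap flags an unstable feature
```

The informative feature keeps a high shrunk score, while noise features are pushed towards zero on the held-out half.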
Pub Date: 2023-06-15 | DOI: 10.1007/s10182-023-00478-8
Antonio Di Noia, Marzia Marcheselli, Caterina Pisani, Luca Pratelli
A family of consistent tests, derived from a characterization of the probability generating function, is proposed for assessing Poissonity against a wide class of count distributions, which includes some of the most frequently adopted alternatives to the Poisson distribution. Specifically, the family of test statistics is based on the difference between the plug-in estimator of the Poisson cumulative distribution function and the empirical cumulative distribution function. The test statistics have an intuitive and simple form and are asymptotically normally distributed, allowing a straightforward implementation of the test. The finite sample properties of the test are investigated by means of an extensive simulation study. The test shows satisfactory behaviour compared to other tests with known limit distribution.
Title: A family of consistent normally distributed tests for Poissonity. AStA Advances in Statistical Analysis 108(1), 209–223.
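A toy version of the CDF-comparison idea can be written in a few lines; this uses a sup-norm form for illustration, and does not reproduce the paper's exact statistic or its normalisation:

```python
import math
import numpy as np

def poisson_cdf(x, lam):
    # P(X <= x) for X ~ Poisson(lam), x a non-negative integer
    return sum(math.exp(-lam) * lam**j / math.factorial(j)
               for j in range(int(x) + 1))

def poissonity_stat(counts):
    # sup-distance between the empirical CDF and the plug-in Poisson CDF
    # with lambda estimated by the sample mean
    counts = np.asarray(counts)
    lam_hat = counts.mean()
    support = np.arange(counts.max() + 1)
    ecdf = np.array([(counts <= x).mean() for x in support])
    fitted = np.array([poisson_cdf(x, lam_hat) for x in support])
    return np.abs(ecdf - fitted).max()

rng = np.random.default_rng(42)
stat_pois = poissonity_stat(rng.poisson(3.0, size=500))     # small under H0
stat_geom = poissonity_stat(rng.geometric(0.25, size=500))  # larger: overdispersed
```

An overdispersed alternative (here a geometric sample) produces a visibly larger discrepancy than a genuine Poisson sample of the same size.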
Pub Date: 2023-05-04 | DOI: 10.1007/s10182-023-00475-x
Katarina Halaj, Bojana Milošević, Marko Obradović, M. Dolores Jiménez-Gamero
This paper uses independence-type characterizations to propose a class of test statistics which can be used for testing goodness-of-fit with several classes of null distributions. The resulting tests are consistent against fixed alternatives. Some limiting and small sample properties of the test statistics are explored. In comparison with common universal goodness-of-fit tests, the new tests exhibit better power for most of the alternatives considered, while in comparison with another characterization-based procedure, the new tests provide competitive or comparable power in various simulation settings. The practical usefulness of the proposed tests is demonstrated through several real-data examples.
Title: Correlation-type goodness-of-fit tests based on independence characterizations. AStA Advances in Statistical Analysis 108(1), 185–207.
Pub Date: 2023-04-29 | DOI: 10.1007/s10182-023-00477-9
Kristin Blesch, David S. Watson, Marvin N. Wright
Despite the popularity of feature importance (FI) measures in interpretable machine learning, the statistical adequacy of these methods is rarely discussed. From a statistical perspective, a major distinction is between analysing a variable’s importance before and after adjusting for covariates—i.e., between marginal and conditional measures. Our work draws attention to this rarely acknowledged, yet crucial distinction and showcases its implications. We find that few methods are available for testing conditional FI and practitioners have hitherto been severely restricted in method application due to mismatched data requirements. Most real-world data exhibits complex feature dependencies and incorporates both continuous and categorical features (i.e., mixed data). Both properties are oftentimes neglected by conditional FI measures. To fill this gap, we propose to combine the conditional predictive impact (CPI) framework with sequential knockoff sampling. The CPI enables conditional FI measurement that controls for any feature dependencies by sampling valid knockoffs—hence, generating synthetic data with similar statistical properties—for the data to be analysed. Sequential knockoffs were deliberately designed to handle mixed data and thus allow us to extend the CPI approach to such datasets. We demonstrate through numerous simulations and a real-world example that our proposed workflow controls type I error, achieves high power, and is in line with results given by other conditional FI measures, whereas marginal FI metrics can result in misleading interpretations. Our findings highlight the necessity of developing statistically adequate, specialized methods for mixed data.
Title: Conditional feature importance for mixed data. AStA Advances in Statistical Analysis 108(2), 259–278.
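The marginal-versus-conditional distinction can be illustrated with exact Gaussian knockoffs in a fabricated two-feature linear model. The paper itself uses sequential knockoffs to handle mixed data; everything below is a simplified Gaussian example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # Var(x2) = 1, Corr(x1, x2) = 0.8
y = 2.0 * x1 + rng.normal(size=n)          # x2 is conditionally irrelevant

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((y - X @ beta) ** 2)

def cpi(col, knockoff):
    """Conditional predictive impact: rise in loss when one feature is
    replaced by a knockoff drawn from its conditional distribution."""
    Xk = X.copy()
    Xk[:, col] = knockoff
    return np.mean((y - Xk @ beta) ** 2) - mse

# exact knockoffs for this bivariate Gaussian design:
# E[x1 | x2] = 0.8 * x2 and E[x2 | x1] = 0.8 * x1, conditional sd 0.6
cpi_x1 = cpi(1, 0.8 * x2 + 0.6 * rng.normal(size=n))
cpi_x2 = cpi(2, 0.8 * x1 + 0.6 * rng.normal(size=n))
```

Here `cpi_x1` is large while `cpi_x2` is near zero: x2 is marginally correlated with y but carries no information beyond x1, which is exactly the distinction a marginal FI measure would miss.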
Pub Date: 2023-03-31 | DOI: 10.1007/s10182-023-00474-y
Marta Ferreira
Extreme value theory (EVT) encompasses a set of methods for inference on the risk inherent in various phenomena in the economic, financial, actuarial, environmental, hydrological and climatic sciences, as well as in various areas of engineering. In many situations the clustering of high values may affect the risk of occurrence of extreme phenomena. Examples include prolonged extreme temperatures resulting in drought, persistent intense rainfall leading to floods, and successive stock market falls with consequent catastrophic losses. The extremal index is an EVT measure associated with the degree of clustering of extreme values. In many situations, and under certain conditions, it corresponds to the reciprocal of the mean size of high-value clusters. The estimation of the extremal index generally entails two sources of uncertainty: the level at which high observations are considered and the identification of clusters. There are several contributions in the literature on the estimation of the extremal index, including methodologies to overcome the aforementioned sources of uncertainty. In this work we will revisit several existing estimators, apply automatic choice methods, both for the threshold and for the clustering parameter, and compare the performance of the methods. We will end with an application to meteorological data.
Title: Clustering of extreme values: estimation and application. AStA Advances in Statistical Analysis 108(1), 101–125.
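The reciprocal relation between the extremal index and the mean cluster size can be seen in a minimal runs-type estimator. This is a textbook form for illustration, not the paper's automatic-choice procedure, and the threshold `u` and run length `r` are left to the user:

```python
import numpy as np

def runs_estimator(x, u, r):
    """Runs estimator of the extremal index theta: exceedances of u separated
    by more than r non-exceedances start a new cluster, and
    theta-hat = (#clusters) / (#exceedances), i.e. the reciprocal of the
    observed mean cluster size."""
    exceed = np.flatnonzero(np.asarray(x) > u)
    if exceed.size == 0:
        return float("nan")
    n_clusters = 1 + int(np.sum(np.diff(exceed) > r))
    return n_clusters / exceed.size

# two clusters ({5, 6} and {7}) over three exceedances: theta-hat = 2/3,
# the reciprocal of the mean cluster size 1.5
theta_hat = runs_estimator([0, 5, 6, 0, 0, 0, 7, 0], u=4, r=2)
```

Both `u` and `r` are exactly the two sources of uncertainty the abstract mentions, which is what motivates automatic choice methods.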
This paper proposes an M-quantile regression approach to address the heterogeneity of the housing market in a modern European city. We show how M-quantile modelling is a rich and flexible tool for empirical market price data analysis, allowing us to obtain a robust estimation of the hedonic price function whilst accounting for different sources of heterogeneity in market prices. The suggested methodology can generally be used to analyse nonlinear interactions between prices and predictors. In particular, we develop a spatial semiparametric M-quantile model to capture both the potential nonlinear effects of the cultural environment on pricing and spatial trends. In both cases, nonlinearity is introduced into the model using appropriate basis functions. We show how the implicit price associated with the variable that measures cultural amenities can be determined in this semiparametric framework. Our findings show that the effect of several housing attributes and urban amenities differs significantly across the response distribution, suggesting that buyers of lower-priced properties behave differently than buyers of higher-priced properties.
Title: A spatial semiparametric M-quantile regression for hedonic price modelling, by Francesco Schirripa Spagnolo, Riccardo Borgoni, Antonella Carcagnì, Alessandra Michelangeli, Nicola Salvati. AStA Advances in Statistical Analysis 108(1), 159–183. Pub Date: 2023-03-30 | DOI: 10.1007/s10182-023-00476-w
Pub Date: 2023-03-29 | DOI: 10.1007/s10182-023-00473-z
Giovanni Saraceno, Abhik Ghosh, Ayanendranath Basu, Claudio Agostinelli
Many real-life data sets can be analyzed using linear mixed models (LMMs). Since these are ordinarily based on normality assumptions, under small deviations from the model the inference can be highly unstable when the associated parameters are estimated by classical methods. On the other hand, the density power divergence (DPD) family, which measures the discrepancy between two probability density functions, has been successfully used to build robust estimators with high stability associated with minimal loss in efficiency. Here, we develop the minimum DPD estimator (MDPDE) for independent but non-identically distributed observations for LMMs according to the variance components model. We prove that the theoretical properties hold, including consistency and asymptotic normality of the estimators. The influence function and sensitivity measures are computed to explore the robustness properties. As a data-based choice of the MDPDE tuning parameter α is very important, we propose two candidates as “optimal” choices, where optimality is in the sense of choosing the strongest downweighting that is necessary for the particular data set. We conduct a simulation study comparing the proposed MDPDE, for different values of α, with S-estimators, M-estimators and the classical maximum likelihood estimator, considering different levels of contamination.
Title: Robust estimation of fixed effect parameters and variances of linear mixed models: the minimum density power divergence approach. AStA Advances in Statistical Analysis 108(1), 127–157.
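For a normal location model with unit variance, the μ-dependent part of the DPD objective has a closed form, which makes the downweighting effect of α easy to demonstrate. This is a sketch under those simplifying assumptions, not the LMM variance-components setting of the paper:

```python
import numpy as np

def mdpde_location(x, alpha, grid_size=2001):
    """Minimum density power divergence estimate of a normal location with
    sigma fixed at 1. For this model the mu-dependent part of the DPD
    objective reduces to maximising mean(exp(-alpha * (x - mu)**2 / 2)),
    which smoothly downweights observations far from mu (alpha -> 0
    recovers the maximum likelihood estimator, i.e. the sample mean).
    Grid search keeps the sketch dependency-free."""
    x = np.asarray(x, dtype=float)
    grid = np.linspace(x.min(), x.max(), grid_size)
    obj = np.array([np.mean(np.exp(-alpha * (x - mu) ** 2 / 2))
                    for mu in grid])
    return grid[int(np.argmax(obj))]

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 100), np.full(10, 10.0)])  # 10 outliers
mdpde = mdpde_location(x, alpha=0.5)   # stays near the true location 0
naive = x.mean()                       # dragged towards the outliers
```

Larger α buys more downweighting at some efficiency cost, which is exactly the trade-off behind the data-based "optimal" choices discussed in the abstract.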
Pub Date: 2023-03-20 | DOI: 10.1007/s10182-023-00472-0
Marzia Freo, Alessandra Luati
Communication through websites is often characterised by short texts of only a few words, such as image captions or tweets. This paper explores the class of supervised learning methods for the analysis of short texts, as an alternative to unsupervised methods, widely employed to infer topics from structured texts. The aim is to assess the effectiveness of text data in social sciences, when they are used as explanatory variables in regression models. To this purpose, we compare different variable selection procedures when text regression models are fitted to real, short, text data. We discuss the results obtained by several variants of lasso, screening-based methods and randomisation-based models, such as sure independence screening and stability selection, in terms of number and importance of selected variables, assessed through goodness-of-fit measures, inclusion frequency and model class reliance. Latent Dirichlet allocation results are also considered as a term of comparison. Our perspective is primarily empirical and our starting point is the analysis of two real case studies, though bootstrap replications of each dataset are considered. The first case study aims at explaining price variations based on the information contained in the description of items on sale on e-commerce platforms. The second concerns open-ended questions in surveys on satisfaction ratings. The case studies are different in nature and representative of different kinds of short texts: in one case a concise descriptive text is considered, whereas in the other the text expresses an opinion.
Title: Lasso-based variable selection methods in text regression: the case of short texts. AStA Advances in Statistical Analysis 108(1), 69–99.
Pub Date: 2023-02-14 | DOI: 10.1007/s10182-023-00470-2
Hirofumi Michimae, Takeshi Emura
Title: Correction: Bayesian ridge regression for survival data based on a vine copula-based prior. AStA Advances in Statistical Analysis 108(3), 703.